• Title/Summary/Keyword: configuration algorithm (구성 알고리즘)


Overview of Research Trends in Estimation of Forest Carbon Stocks Based on Remote Sensing and GIS (원격탐사와 GIS 기반의 산림탄소저장량 추정에 관한 주요국 연구동향 개관)

  • Kim, Kyoung-Min; Lee, Jung-Bin; Kim, Eun-Sook; Park, Hyun-Ju; Roh, Young-Hee; Lee, Seung-Ho; Park, Key-Ho; Shin, Hyu-Seok
    • Journal of the Korean Association of Geographic Information Studies / v.14 no.3 / pp.236-256 / 2011
  • Forest carbon stock change due to land use change is important data required by the UNFCCC (United Nations Framework Convention on Climate Change). Spatially explicit estimation of forest carbon stocks based on IPCC GPG (Intergovernmental Panel on Climate Change, Good Practice Guidance) Tier 3 gives high reliability, but current estimates aggregated from NFI (national forest inventory) data do not provide detailed forest carbon stocks by polygon or cell. To improve such estimates, remote sensing and GIS have been used, especially in Europe and North America. We divided the research trends of the main countries into four categories: remote sensing, GIS, geostatistics, and environmental modeling that considers spatial heterogeneity. The easiest method to apply is combining NFI data with a GIS-based forest type map. Considering the especially complicated forest structure of Korea, geostatistics is useful for estimating the local variation of forest carbon. In addition, fine-scale imagery is useful for verifying forest carbon stocks and determining CDM sites. Related domestic research is still at an initial stage, and forest carbon stocks are mainly estimated using the k-nearest neighbor (k-NN) method. To select a method suitable for Korean forests, the applicability of diverse spatial data and algorithms must be considered, and the methods must be compared.
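The k-NN estimation idea mentioned in the abstract can be sketched in a few lines; the spectral features, plot carbon values, and choice of k below are hypothetical toy data, not from the paper:

```python
import math

def knn_estimate(target, plots, k=3):
    """Estimate carbon stock (t C/ha) for one pixel as the mean of the
    k spectrally nearest field plots (hypothetical toy data)."""
    dists = sorted((math.dist(target, feat), carbon) for feat, carbon in plots)
    nearest = dists[:k]
    return sum(c for _, c in nearest) / k

# Hypothetical plots: (spectral feature vector, measured carbon stock).
plots = [((0.1, 0.2), 40.0), ((0.9, 0.8), 120.0),
         ((0.2, 0.1), 50.0), ((0.8, 0.9), 110.0)]
print(knn_estimate((0.15, 0.15), plots, k=2))  # mean of 40 and 50 -> 45.0
```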

An Implementation of Lighting Control System using Interpretation of Context Conflict based on Priority (우선순위 기반의 상황충돌 해석 조명제어시스템 구현)

  • Seo, Won-Il; Kwon, Sook-Youn; Lim, Jae-Hyun
    • Journal of Internet Computing and Services / v.17 no.1 / pp.23-33 / 2016
  • Current smart lighting identifies a user's action and location through sensors and then offers a lighting environment suited to that context. This sensor-based context-awareness technology considers only a single user, and studies that interpret context occurrences and conflicts among multiple users are lacking. Existing studies have used fuzzy theory and algorithms such as ReBa to resolve context conflict, but these approaches merely avoid conflicts by dividing the space where users are located into several areas and providing services per area; they cannot be regarded as customized services that resolve conflicts based on personal preference. This paper proposes a priority-based LED lighting control system that interprets multiple context conflicts: when a service conflict arises because various contexts occur simultaneously for many users, the system decides which service to provide based on priorities granted by context type. The study classifies the residential environment into five areas (living room, bedroom, study room, kitchen, and bathroom) and defines 20 contexts that may occur within each area for multiple users, such as exercising, doing makeup, reading, dining, and entering. The proposed system defines users' various contexts with an ontology-based model and provides a user-oriented lighting environment through standards-based rules and a context reasoning engine. To resolve context conflicts among users in the same space at the same time, contexts requiring user concentration are given the highest priority; when priorities are equal, visual comfort serves as the tie-breaking criterion for service selection.
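The priority-then-visual-comfort selection rule can be sketched as follows; the context names come from the abstract, but the numeric priority and comfort values are hypothetical:

```python
# Hypothetical priorities: higher value = higher priority; contexts that
# require concentration (e.g. reading) rank highest, as in the paper.
PRIORITY = {"reading": 3, "doing_makeup": 3, "exercising": 2, "dining": 1}
# Hypothetical visual-comfort scores used only to break priority ties.
VISUAL_COMFORT = {"reading": 0.9, "doing_makeup": 0.8,
                  "exercising": 0.5, "dining": 0.6}

def resolve(contexts):
    """Pick the winning context among simultaneous ones: highest priority
    first, then best visual comfort as the tie-breaker."""
    return max(contexts, key=lambda c: (PRIORITY[c], VISUAL_COMFORT[c]))

print(resolve(["dining", "reading"]))        # reading (higher priority)
print(resolve(["reading", "doing_makeup"]))  # tie -> reading (more comfort)
```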

Target Word Selection Disambiguation using Untagged Text Data in English-Korean Machine Translation (영한 기계 번역에서 미가공 텍스트 데이터를 이용한 대역어 선택 중의성 해소)

  • Kim Yu-Seop; Chang Jeong-Ho
    • The KIPS Transactions: Part B / v.11B no.6 / pp.749-758 / 2004
  • In this paper, we propose a new method for disambiguating target word selection in English-Korean machine translation that uses only a raw corpus, without additional human effort. We use two data-driven techniques: Latent Semantic Analysis (LSA) and Probabilistic Latent Semantic Analysis (PLSA). These techniques can represent complex semantic structures in given contexts, such as text passages. We construct linguistic semantic knowledge with the two techniques and use it for target word selection in English-Korean machine translation, utilizing grammatical relationships stored in a dictionary. We use the k-nearest neighbor (k-NN) learning algorithm to resolve the data sparseness problem in target word selection, estimating the distance between instances with these models. In experiments, we use TREC AP news data to construct the latent semantic space and the Wall Street Journal corpus to evaluate target word selection. With the latent semantic analysis methods, the accuracy of target word selection improved by over 10%, and PLSA showed better accuracy than LSA. Finally, using correlation calculations, we show how the accuracy relates to two important factors: the dimensionality of the latent space and the k value of k-NN learning.
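A minimal sketch of the LSA step: words are projected into a truncated-SVD latent space and compared by cosine similarity, the distance a k-NN selection step would use. The term-passage matrix below is a hypothetical toy example, not the TREC data:

```python
import numpy as np

# Toy term-passage matrix (rows: words, cols: passages); counts are hypothetical.
words = ["bank", "money", "river", "water"]
X = np.array([[2.0, 0.0, 1.0],
              [3.0, 0.0, 0.0],
              [0.0, 2.0, 1.0],
              [0.0, 3.0, 1.0]])

# LSA: truncated SVD projects each word into a low-dimensional latent space.
U, s, Vt = np.linalg.svd(X, full_matrices=False)
latent = U[:, :2] * s[:2]  # 2-D latent word vectors

def cos(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Latent-space similarity: "bank" should land nearer "money" than "river".
print(cos(latent[0], latent[1]) > cos(latent[0], latent[2]))
```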

Koreanized Analysis System Development for Groundwater Flow Interpretation (지하수유동해석을 위한 한국형 분석시스템의 개발)

  • Choi, Yun-Yeong
    • Journal of the Korean Society of Hazard Mitigation / v.3 no.3 s.10 / pp.151-163 / 2003
  • In this study, an algorithm for the groundwater flow process was established to develop a Koreanized groundwater program handling the geographic and geologic conditions of aquifers that show dynamic behaviour in groundwater flow systems. All input data settings of the 3-DFM model developed in this study are organized in Korean, and the model contains a help function for each input: detailed information about an input parameter appears when the mouse pointer is placed over it. The model also makes it easy to specify the geologic boundary condition for each stratum and the initial head data in a worksheet. In addition, it displays input boxes for each analysis condition, so setting parameters for steady and unsteady flow analyses, as well as for analyzing each stratum's characteristics, is less complicated than in the existing MODFLOW. Descriptions of input data are displayed on the right side of the window and the analysis results on the left, and the results can also be viewed as a TXT file. The model is a numerical model using the finite difference method, and its applicability was examined by comparing observed groundwater heads with those simulated using actual recharge amounts and estimated parameters. The 3-DFM model was applied to the Sehwa-ri and Songdang-ri areas of Jeju, Korea, to analyze the groundwater flow system under pumping; the observed and computed groundwater heads were almost in accordance, with error percentages of 0.03-0.07. From the equipotentials and velocity vectors computed in a simulation of conditions before pumping started in the study area, the groundwater flow is analyzed to be distributed evenly from Nopen-orum and Munseogi-orum to Wolang-bong, Yongnuni-orum, and Songja-bong. These results accord with MODFLOW's.
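The finite difference approach underlying such models can be illustrated with a toy 1-D steady-state head solver (Jacobi iteration on Laplace's equation); the grid size and boundary heads are hypothetical and unrelated to the Jeju application:

```python
def solve_heads(h_left, h_right, n=11, iters=5000):
    """Steady-state 1-D groundwater heads by finite differences:
    each interior node relaxes to the mean of its neighbours
    (Jacobi iteration on h'' = 0 with fixed boundary heads)."""
    h = [h_left] + [0.0] * (n - 2) + [h_right]
    for _ in range(iters):
        h = [h[0]] + [(h[i - 1] + h[i + 1]) / 2 for i in range(1, n - 1)] + [h[-1]]
    return h

heads = solve_heads(100.0, 90.0)     # hypothetical boundary heads (m)
print(round(heads[5], 2))            # midpoint of the linear profile -> 95.0
```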

A Scalable and Modular Approach to Understanding of Real-time Software: An Architecture-based Software Understanding (ARSU) and the Software Re/reverse-engineering Environment (SRE) (실시간 소프트웨어의 조절적·단위적 이해 방법 : ARSU(Architecture-based Software Understanding)와 SRE(Software Re/reverse-engineering Environment))

  • Lee, Moon-Kun
    • The Transactions of the Korea Information Processing Society / v.4 no.12 / pp.3159-3174 / 1997
  • This paper reports research to develop a methodology and a tool for understanding very large and complex real-time software. The methodology and the tool, mostly developed by the author, are called Architecture-based Real-time Software Understanding (ARSU) and the Software Re/reverse-engineering Environment (SRE), respectively. Due to size and complexity, such software is commonly very hard to understand during the reengineering process. This research facilitates scalable re/reverse-engineering of real-time software based on its architecture, in three-dimensional perspectives: structural, functional, and behavioral views. Firstly, the structural view reveals the overall architecture, the specification (outline) view, and the algorithm (detail) view of the software, based on a hierarchically organized parent-child relationship. The basic building block of the architecture is the software unit (SWU), generated by user-defined criteria. The architecture facilitates navigating the software top-down or bottom-up, captures the specification and algorithm views at different levels of abstraction, and shows the functional and behavioral information at those levels. Secondly, the functional view includes graphs of data/control flow, input/output, definition/use, variable/reference, etc.; each feature of the view captures a different kind of functionality of the software. Thirdly, the behavioral view includes state diagrams, interleaved event lists, etc., showing the dynamic properties of the software at runtime. Besides these views, there are a number of other documents: capabilities, interfaces, comments, code, etc. One of the most powerful characteristics of this approach is the capability of abstracting and exploding this dimensional information in the architecture through navigation. These capabilities establish the foundation for scalable and modular understanding of the software, and allow engineers to extract reusable components from the software during the reengineering process.
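The hierarchical SWU organization and top-down navigation described above might be sketched as a simple parent-child tree; the class and unit names are hypothetical:

```python
class SWU:
    """Software unit: one node in the parent-child architecture tree."""
    def __init__(self, name, children=()):
        self.name, self.children = name, list(children)

    def top_down(self, depth=0):
        """Navigate the architecture top-down, yielding (depth, name)."""
        yield depth, self.name
        for child in self.children:
            yield from child.top_down(depth + 1)

# Hypothetical SWU hierarchy for a small real-time system.
root = SWU("system", [SWU("scheduler", [SWU("dispatch")]), SWU("io")])
print([name for _, name in root.top_down()])
# ['system', 'scheduler', 'dispatch', 'io']
```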


The Optimal Configuration of Arch Structures Using Force Approximate Method (부재력(部材力) 근사해법(近似解法)을 이용(利用)한 아치구조물(構造物)의 형상최적화(形狀最適化)에 관한 연구(研究))

  • Lee, Gyu Won; Ro, Min Lae
    • KSCE Journal of Civil and Environmental Engineering Research / v.13 no.2 / pp.95-109 / 1993
  • In this study, the optimal configuration of arch structures is found with a decomposition technique. The objective is to provide a method for optimizing the shapes of both two-hinged and fixed arches. The optimal-configuration problem includes interaction formulas and working stress and buckling stress constraints, on the assumption that arch ribs can be approximated by a finite number of straight members. On the first level, buckling loads are calculated from the relation between the stiffness matrix and the geometric stiffness matrix using the Rayleigh-Ritz method, and the number of structural analyses is decreased by approximating member forces through sensitivity analysis in the design-space approach. The objective function is the total weight of the structure, and the constraints cover the working stress, the buckling stress, and side limits. On the second level, the nodal point coordinates of the arch structure are used as design variables, again with the weight as the objective function. By treating the nodal point coordinates as design variables, the optimization reduces to an unconstrained optimal design problem, which is easy to solve. Numerical comparisons for several arch structures with various shapes and constraints show that the convergence rate is very fast regardless of constraint type and arch configuration, and the optimal configurations obtained in this study are almost identical to those of other results. The total weight could be decreased by 17.7%-91.7% when an optimal configuration was reached.
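The first-level buckling computation reduces to a generalized eigenvalue problem, (K − λ·Kg)x = 0, whose smallest eigenvalue λ is the critical load factor. A toy 2-DOF sketch with hypothetical matrices:

```python
import numpy as np

# Hypothetical 2-DOF elastic stiffness K and geometric stiffness Kg.
K = np.array([[4.0, -1.0],
              [-1.0, 4.0]])
Kg = np.array([[1.0, 0.0],
               [0.0, 1.0]])

# (K - lambda*Kg) x = 0  <=>  eigenvalues of Kg^{-1} K; the smallest
# eigenvalue is the critical buckling load factor.
lams = np.linalg.eigvals(np.linalg.solve(Kg, K))
print(round(min(lams.real), 6))  # eigenvalues of K are 3 and 5 -> 3.0
```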


Advanced Improvement for Frequent Pattern Mining using Bit-Clustering (비트 클러스터링을 이용한 빈발 패턴 탐사의 성능 개선 방안)

  • Kim, Eui-Chan; Kim, Kye-Hyun; Lee, Chul-Yong; Park, Eun-Ji
    • Journal of Korea Spatial Information System Society / v.9 no.1 / pp.105-115 / 2007
  • Data mining extracts interesting knowledge from large databases. Among the numerous data mining techniques, research is primarily concentrated on clustering and association rules: clustering, an active research topic, mainly deals with analyzing spatial and attribute data, while association-rule mining deals with identifying frequent patterns. An earlier study advanced the Apriori algorithm using an existing bit-clustering algorithm. In an effort to improve on Apriori, we investigated FP-Growth and discussed the possibility of adopting bit-clustering to address FP-Growth's problems. FP-Growth using bit-clustering demonstrated better performance than the existing method. We used chess data, common in pattern-mining evaluation, in our experiments, building FP-trees with different minimum support values. With high minimum support values, results similar to those of the existing technique were obtained; in the other cases, the proposed technique performed better. As a result, the proposed technique is considered to lead to higher performance. In addition, a method to apply bit-clustering to GML data is proposed.
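The core of the bit-clustering idea, counting itemset support as the popcount of ANDed per-item transaction bitmaps, can be sketched as follows (toy transactions, not the chess dataset):

```python
# Each item keeps a bitmap with bit i set if transaction i contains it;
# the support of an itemset is then the popcount of the AND of its bitmaps.
transactions = [{"a", "b"}, {"a", "c"}, {"a", "b", "c"}, {"b"}]

bitmaps = {}
for i, t in enumerate(transactions):
    for item in t:
        bitmaps[item] = bitmaps.get(item, 0) | (1 << i)

def support(itemset):
    """Support count of an itemset via bitwise AND of item bitmaps."""
    it = iter(itemset)
    bits = bitmaps[next(it)]
    for item in it:
        bits &= bitmaps[item]
    return bin(bits).count("1")

print(support({"a", "b"}))  # {a,b} appears in transactions 0 and 2 -> 2
```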


A Study of Textured Image Segmentation using Phase Information (페이즈 정보를 이용한 텍스처 영상 분할 연구)

  • Oh, Suk
    • Journal of the Korea Society of Computer and Information / v.16 no.2 / pp.249-256 / 2011
  • Finding new sets of features to represent textured images is one of the most important problems in textured image analysis, because it is impossible to construct a perfect feature set representing every textured image, and some relevant features efficient for the image-processing job at hand must inevitably be chosen. This paper aims to find features that are efficient for textured image segmentation, and presents a segmentation method based on the Gabor filter, which is known to be a very efficient and effective tool modeling the human visual system for texture analysis. Filtering a real-valued input image with the Gabor filter produces complex-valued output data defined in the spatial frequency domain. This complex value, as usual, gives a modulus and a phase. This paper focuses on the phase information rather than the modulus: the modulus is considered very useful for region analysis in texture, while the phase has been considered almost useless. This paper shows, however, that the phase information can also be fully useful and effective for region analysis in texture once a good method is introduced. We propose the "phase derivated method", an efficient and effective way to compute useful phase information directly from the filtered value. The new method effectively reduces the computing burden and widens the range of applicable textured images.
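A 1-D analogue of the Gabor filtering step shows where the modulus and phase come from: convolving with a Gaussian-windowed complex sinusoid yields a complex response whose magnitude is the usual texture energy and whose angle is the phase the paper builds on. The kernel parameters below are hypothetical:

```python
import numpy as np

def gabor_response(signal, freq, sigma=4.0):
    """Convolve a 1-D signal with a complex Gabor kernel
    (Gaussian window times a complex sinusoid); parameters are hypothetical."""
    n = np.arange(-10, 11)
    kernel = np.exp(-n**2 / (2 * sigma**2)) * np.exp(2j * np.pi * freq * n)
    return np.convolve(signal, kernel, mode="same")

x = np.cos(2 * np.pi * 0.1 * np.arange(64))   # toy "textured" signal
resp = gabor_response(x, freq=0.1)
module = np.abs(resp)    # modulus: the commonly used texture energy
phase = np.angle(resp)   # phase: the information this paper exploits
print(module.shape, phase.shape)
```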

OD matrix estimation using link use proportion sample data as additional information (표본링크이용비를 추가정보로 이용한 OD 행렬 추정)

  • 백승걸; 김현명; 신동호
    • Journal of Korean Society of Transportation / v.20 no.4 / pp.83-93 / 2002
  • To improve estimation performance, research has studied using additional information beyond traffic counts and the target OD, at additional survey cost. The purpose of this paper is to improve OD estimation by reducing the set of feasible solutions with cost-efficient additional information beyond traffic counts and the target OD. For this purpose, we propose an OD estimation method that uses the sample link use proportion as additional information: the relationship between OD trips and link flows is obtained from the sample link use proportion, a highly reliable roadside-survey measure, rather than from a traffic assignment of the target OD. Accordingly, this paper proposes an OD estimation algorithm that enforces the link-flow conservation rule under a path-based non-equilibrium traffic assignment concept. Numerical results on a test network show that OD estimation can be improved even where the precision of the additional data is low, since the sample link use proportion conveys the relationship between OD trips and link flows. The method also estimates robustly when traffic counts or OD trips change, since it is not greatly affected by errors in the target OD or the traffic counts. In addition, we note that the level of data precision must be set in consideration of the precision of the other information, because a "precision problem between information" arises when additional information such as the sample link use proportion is used, and that methods using traffic counts as basic information must measure link flows to a certain precision to raise the applicability of the additional information. Finally, additional link information raises an optimal counting location problem: in terms of information precision, the optimal survey location problem for the sample link use proportion may affect OD estimation performance more than the optimal counting location problem for link flows.
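The relationship the method exploits, link counts as proportion-weighted sums of OD trips, can be sketched as a small least-squares problem; the proportions and counts below are hypothetical:

```python
import numpy as np

# v = P @ T: observed link counts v, surveyed link-use proportions P,
# unknown OD trips T. With P known from roadside survey, T follows by
# least squares. All numbers are hypothetical.
P = np.array([[0.8, 0.1],    # link 1 carries 80% of OD pair 1, 10% of pair 2
              [0.2, 0.9]])   # link 2 carries the remainder
v = np.array([90.0, 110.0])  # observed link counts (veh/h)

T, *_ = np.linalg.lstsq(P, v, rcond=None)
print(np.round(T, 1))  # -> [100. 100.]
```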

A Study on the RFID/WSN Integrated System for the Ubiquitous Computing Environment (유비쿼터스 컴퓨팅 환경을 위한 RFID/WSN 통합 관리 시스템에 관한 연구)

  • Park, Yong-Min; Lee, Jun-Hyuk
    • Journal of the Institute of Electronics Engineers of Korea TC / v.49 no.1 / pp.31-46 / 2012
  • The most critical technology for implementing ubiquitous health care is Ubiquitous Sensor Network (USN) technology, which combines various sensor technologies, processor integration technology, and the wireless network technologies Radio Frequency Identification (RFID) and Wireless Sensor Network (WSN) to easily gather and monitor physical environment information from a remote site. With this capability, USN technology extends the information technology of the existing virtual space into real environments. However, although RFID and WSN have technical similarities and mutual effects, they have been studied separately, and sufficient studies have not been conducted on their technical integration. Recognizing this issue, EPCglobal proposed the EPC Sensor Network to efficiently integrate and interoperate RFID and WSN technologies based on the international-standard EPCglobal network. The EPC Sensor Network uses complex event processing in the middleware to integrate data arising from RFID and WSN in a single environment and to interoperate the events over the EPCglobal network. However, because the EPC Sensor Network continues processing even when the minimum conditions for finding complex events in the middleware are not met, its operation cost rises. Moreover, since the technology is based on the EPCglobal network, it can neither operate on sensor data alone nor connect and interoperate with the individual information systems where the most important information in the ubiquitous computing environment is stored. Therefore, to address the problems of the existing system, we propose the design and implementation of a USN integration management system. First, we propose an integration system that manages RFID and WSN data based on the Session Initiation Protocol (SIP). Second, we define minimum conditions for complex events so that unnecessary complex-event processing in the middleware can be detected and skipped, and propose an algorithm that extracts complex events only when the minimum conditions are met. To evaluate the performance of the proposed methods, we implemented the SIP-based integration management system.
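The minimum-condition idea might be sketched as a pre-check that skips complex-event matching until every required primitive event type has arrived; the event names and rules below are hypothetical:

```python
def extract_complex_events(events, rules):
    """Evaluate a complex-event rule only when its minimum condition holds:
    every required primitive event type has been observed. Rules whose
    condition is unmet are skipped, saving middleware work."""
    seen = {e["type"] for e in events}
    matched = []
    for name, required in rules.items():
        if required <= seen:          # minimum condition met
            matched.append(name)      # full matching would run only here
    return matched

# Hypothetical rules: complex event -> required primitive event types.
rules = {"patient_fall": {"rfid_tag", "accel_spike"},
         "door_breach": {"rfid_tag", "door_sensor", "motion"}}
events = [{"type": "rfid_tag"}, {"type": "accel_spike"}]
print(extract_complex_events(events, rules))  # -> ['patient_fall']
```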