Image matching is a crucial preprocessing step for the effective utilization of multi-temporal and multi-sensor very high resolution (VHR) satellite images. Deep learning (DL), which is attracting widespread interest, has proven to be an efficient approach for measuring the similarity between image pairs quickly and accurately by extracting complex and detailed features from satellite images. However, matching VHR satellite images remains challenging because the results of DL models depend on the quantity and quality of the training dataset, and because creating a training dataset from VHR satellite images is difficult. Therefore, this study examines the feasibility of a DL-based method for matching pair extraction, the most time-consuming process during image registration. It also analyzes the factors that affect accuracy depending on the configuration of the training dataset, when that dataset is built from an existing, biased multi-sensor VHR image database. For this purpose, the training dataset was composed of correct and incorrect matching pairs, created by assigning true and false labels to image pairs extracted with a grid-based Scale Invariant Feature Transform (SIFT) algorithm from a total of 12 multi-temporal and multi-sensor VHR images. The Siamese convolutional neural network (SCNN) proposed for matching pair extraction is trained on the constructed dataset and measures similarity by passing the two images in parallel through two identical convolutional neural network branches. The results confirm that data acquired from a VHR satellite image database can serve as a DL training dataset and indicate the potential to improve the efficiency of the matching process through an appropriate configuration of multi-sensor images.
DL-based image matching techniques using multi-sensor VHR satellite images are expected to replace existing manual feature extraction methods owing to their stable performance, and to develop further into an integrated DL-based image registration framework.
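The core Siamese idea described above can be sketched in a few lines: two inputs pass through the same embedding function (shared weights), and a distance on the embeddings serves as the similarity score. This is a minimal illustration only; the toy linear embedding and the weights below are placeholders for the paper's convolutional branches.

```python
WEIGHTS = [0.5, -0.25, 1.0]  # shared parameters used by BOTH branches

def embed(patch):
    """Map a raw patch (list of floats) to a scalar feature via a dot product."""
    return sum(w * x for w, x in zip(WEIGHTS, patch))

def similarity(patch_a, patch_b):
    """Similarity in (0, 1]: 1.0 means the two embeddings are identical."""
    d = abs(embed(patch_a) - embed(patch_b))
    return 1.0 / (1.0 + d)

# Identical patches yield the maximum score of 1.0.
print(similarity([1, 2, 3], [1, 2, 3]))
```

Because both calls go through the same `embed`, updating `WEIGHTS` during training changes both branches at once, which is exactly the weight-sharing property that distinguishes a Siamese network from two independent networks.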
Kim, Ji-Young; Lee, Hae-Yeoun; Im, Dong-Hyuck; Ryu, Seung-Jin; Choi, Jung-Ho; Lee, Heung-Kyu
The KIPS Transactions: Part B / v.15B, no.4 / pp.307-314 / 2008
This paper presents a new robust watermarking method for curves that uses informed detection. To embed watermarks, the algorithm parameterizes a curve with the B-spline model and obtains the model's control points. From these control points, a 2D mesh is created by Delaunay triangulation, and mesh spectral analysis is then performed to calculate the mesh spectral coefficients, into which watermark messages are embedded in a spread-spectrum way. The watermarked coefficients are inversely transformed into the coordinates of the control points, and the watermarked curve is reconstructed by evaluating the B-spline model with those control points. To detect the embedded watermark, we apply a curve matching algorithm based on the inflection points of the curve. After curve registration, we calculate the difference between the original and watermarked mesh spectral coefficients using the same process as for embedding. By computing correlation coefficients between the detected and candidate watermarks, we decide which watermark was embedded. Experimental results show that the proposed scheme is more robust than previous watermarking schemes against the print-scan process as well as geometrical distortions.
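The spread-spectrum embedding and correlation detection steps above reduce to a simple pattern, shown here on plain coefficient lists. This is a sketch of the generic technique only, not the paper's mesh-spectral pipeline: `alpha` and the candidate watermarks are illustrative values.

```python
def embed_watermark(coeffs, watermark, alpha=0.1):
    """Spread-spectrum embedding: c'_i = c_i + alpha * w_i."""
    return [c + alpha * w for c, w in zip(coeffs, watermark)]

def detect_watermark(original, marked, candidates):
    """Informed detection: correlate the coefficient difference with each
    candidate watermark and return the best-matching one."""
    diff = [m - o for m, o in zip(marked, original)]
    def corr(w):
        return sum(d * x for d, x in zip(diff, w))
    return max(candidates, key=corr)
```

Because detection uses the original coefficients (informed detection), the difference signal is exactly `alpha * w` plus any channel noise, so the true watermark produces the largest correlation.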
Journal of the Korean Association of Geographic Information Studies / v.9, no.4 / pp.34-44 / 2006
Simulation models allow researchers to model large hydrological catchments for comprehensive water resources management and for explaining diffuse pollution processes, such as land-use changes driven by regional development plans. Recently, many studies have examined water quality using Geographic Information Systems (GIS) together with dynamic watershed models such as AGNPS, HSPF, and SWAT, which require handling large amounts of data. The aim of this study is to develop a watershed-based water quality estimation system for assessing impacts on stream water quality. KBASIN-HSPF, proposed in this study, simplifies data compilation for HSPF by facilitating the setup and simulation process. It also supports the spatial interpretation of point and non-point pollutant information, Thiessen rainfall creation, and pre- and post-processing of large environmental datasets. An integration methodology coupling GIS with the water quality model was designed for preprocessing geo-morphologic data. The KBASIN-HSPF interface comprises four modules: registration and modification of basic environmental information, a watershed delineation generator, a watershed geo-morphologic index calculator, and a model input file processor. KBASIN-HSPF was applied to simulate the water quality impact of variations in subbasin pollution discharge structure.
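The Thiessen rainfall step mentioned above amounts to an area-weighted average: each gauge's rainfall is weighted by the area of its Thiessen polygon. A minimal sketch, assuming the polygon areas have already been computed by the GIS layer (the gauge names and values are hypothetical):

```python
def thiessen_rainfall(gauge_rain, gauge_areas):
    """Area-weighted mean rainfall over a watershed:
    sum(P_i * A_i) / sum(A_i), where A_i is the Thiessen polygon area
    assigned to gauge i and P_i its observed rainfall."""
    total_area = sum(gauge_areas.values())
    weighted = sum(gauge_rain[g] * area for g, area in gauge_areas.items())
    return weighted / total_area

# Gauge A covers 3 km^2 with 10 mm; gauge B covers 1 km^2 with 20 mm.
print(thiessen_rainfall({"A": 10.0, "B": 20.0}, {"A": 3.0, "B": 1.0}))
```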
Proceedings of the Korea Water Resources Association Conference / 2008.05a / pp.194-198 / 2008
Significant soil erosion and water quality degradation are occurring in the highland agricultural areas of Kangwon province because of the agronomic and topographic characteristics of the region. Spatial and temporal modeling techniques are therefore often used to analyze soil erosion and sediment behavior at the watershed scale. The Soil and Water Assessment Tool (SWAT) is one of the watershed-scale models widely used for these purposes in Korea. In most cases, SWAT users tend to use readily available input datasets, such as the Ministry of Environment (MOE) land cover data, ignoring temporal and spatial changes in land cover. The spatial and temporal resolutions of the MOE land cover data are not good enough to reflect field conditions for an accurate assessment of soil erosion and sediment behavior. In particular, accelerated soil erosion occurs in agricultural fields that are sometimes impossible to identify with the low-resolution MOE land cover data. Thus, a new land cover dataset was prepared from cadastral maps and high-spatial-resolution images of the Doam-dam watershed, and the SWAT model was calibrated and validated with it. The EI values were 0.79 and 0.85 for streamflow calibration and validation, and 0.79 and 0.86 for sediment calibration and validation, respectively; these values were greater than those obtained with the MOE land cover data. With the newly prepared land cover dataset for the Doam-dam watershed, the SWAT model better predicts hydrologic and sediment behavior. The number of HRUs with the new land cover data increased by 70.2% compared with the MOE land cover, indicating better representation of small agricultural field boundaries. The SWAT-estimated annual average sediment yield for the Doam-dam watershed was 61.8 ton/ha/year with the MOE land cover data, versus 36.2 ton/ha/year (a 70.7% difference) with the new land cover data.
The most significant difference in estimated sediment yield was 548.0%, for subwatershed #2 (165.9 ton/ha/year with the MOE land cover data versus 25.6 ton/ha/year with the new land cover data developed in this study). These results imply that using the MOE land cover data in SWAT sediment simulation for the Doam-dam watershed could produce a 70.7% difference in overall sediment estimation and incorrect identification of sediment hot spots (such as subwatershed #2) for effective sediment management. It is therefore recommended that land cover be carefully validated for the study watershed for accurate hydrologic and sediment simulation with the SWAT model.
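The EI values reported above are, in most SWAT calibration studies, the Nash-Sutcliffe efficiency, which compares model error against the variance of the observations (1.0 is a perfect fit, 0.0 is no better than predicting the observed mean). Assuming that is the metric intended here, it can be computed as:

```python
def nse(observed, simulated):
    """Nash-Sutcliffe efficiency:
    1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2)."""
    mean_obs = sum(observed) / len(observed)
    num = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    den = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - num / den

# A perfect simulation scores 1.0; predicting the mean scores exactly 0.0.
print(nse([1.0, 2.0, 3.0], [1.0, 2.0, 3.0]))
print(nse([1.0, 2.0, 3.0], [2.0, 2.0, 2.0]))
```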
The Journal of Korean Institute of Communications and Information Sciences / v.35, no.3B / pp.421-430 / 2010
Nowadays, as communications networks become broadband, IPTV (Internet Protocol Television) services, which provide various two-way TV services, are increasing. However, because almost all data transmitted between the IPTV set-top box and the smart card reaches the set-top box, an illegal user who gains legal authority by accessing the content illegally with the McComac hack attack is not perfectly prevented. This paper proposes a set-top box access security model that protects against the McComac hack attack, in which an illegal user tries to obtain permission to access the IPTV service by wiring a data line from a smart card to another set-top box of the same kind. The proposed model prevents illegal users by registering, in a certification server, the state information of the smart cards usable in each set-top box, and by testing users who attempt to gain permission illegally. In particular, the model strengthens set-top box security by adapting the public key used to establish the neighbor link and by performing mutual certification with a secret value and a random number created by a pseudo-random function.
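The "secret value plus random number" certification step above follows the standard challenge-response pattern: the verifier sends a fresh random challenge, and the prover answers with a keyed digest that only a holder of the shared secret can produce. A minimal sketch of that generic pattern (not the paper's exact protocol) using Python's stdlib:

```python
import hmac
import hashlib
import secrets

def issue_challenge():
    """Verifier side: generate a fresh random nonce for each attempt."""
    return secrets.token_bytes(16)

def respond(shared_secret, challenge):
    """Prover side: keyed digest over the challenge, computable only
    with knowledge of the shared secret value."""
    return hmac.new(shared_secret, challenge, hashlib.sha256).digest()

def verify(shared_secret, challenge, response):
    """Verifier side: constant-time comparison against the expected response."""
    return hmac.compare_digest(respond(shared_secret, challenge), response)
```

Because a new challenge is issued per attempt, a cloned set-top box that merely replays an old response fails verification; it would need the secret itself to answer a fresh nonce.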
Today's computing systems have expanded business trade and distributed business processes over the Internet. More and more systems are developed from components designed for reusability, independency, and portability. Component-based development (CBD) focuses on these advanced concepts rather than on the passive manipulation of source code in class libraries. Component construction is primary in CBD; however, it can lead to additional cost when new components must be reconstructed under the CBD model. It is also difficult to serve component information rapidly and exactly when no normalization model is established, and frequent user logins on the Web cause overload. Many difficult issues and aspects of component-based development must be investigated to develop good component-based products, and there is no established normalization model that guarantees a proper treatment of components. This paper elaborates on aspects of web applications that adapt to user requirements exactly and rapidly. The distributed components in this paper are used at the smallest practical size on the network, and we suggest a network-addressable interface based on the business domain. We also discuss the internal and external specifications needed to grasp the internal and external relations of the user requirements to be analyzed. The specifications are stored in Servlets, after dividing the information between session and entity beans as EJB (Enterprise JavaBeans) components, which are reusable units in the business domain. These reusable units are accessed through queries to obtain business components. As a major contribution, we propose a system model for registering, auto-arranging, searching, testing, and downloading components, which covers component reusability and component customization.
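The register/search/download model proposed above can be illustrated with a toy in-memory registry; this is a sketch of the concept only, and the class, method names, and component metadata fields are hypothetical, not the paper's actual system.

```python
class ComponentRegistry:
    """Toy component registry: register a component's metadata,
    search by business-domain keyword, and download its descriptor."""

    def __init__(self):
        self._store = {}

    def register(self, name, domain, interface):
        # The descriptor stands in for the paper's internal/external specs.
        self._store[name] = {"domain": domain, "interface": interface}

    def search(self, keyword):
        # Domain-keyword lookup, mirroring the query-based retrieval
        # of reusable business components.
        return [n for n, meta in self._store.items() if keyword in meta["domain"]]

    def download(self, name):
        return self._store[name]
```

A real system would back this with a normalized schema and a network-addressable interface; the point here is only that registration, search, and download operate on the same shared descriptor.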
Online transactions have become familiar in various fields due to the development of ICT and the increase in trading platforms. In particular, the volume of secondhand transactions is growing with the increase in used-goods platforms and users, and reliability is very important given the nature of such transactions. Among them, the used car market is very active because automobiles are operated over a long period of time. However, used car trading is a representative market subject to information asymmetry. This paper presents a DID-based transaction model that guarantees reliability in order to solve the problems of false advertisements and false sales in used car transactions. In the proposed model, sellers register only data issued by the issuing agency at the time of initial sales registration, preventing false sales. Because authentication uses DID Auth during the issuance process, the model is safe from attacks such as sniffing and man-in-the-middle attacks. In the presented transaction model, integrity is verified with the VP's Proof item to increase reliability and resolve the information asymmetry. In addition, direct transactions between buyers and sellers remove third-party intervention, which has the effect of reducing fees.
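The integrity check via the VP's Proof item boils down to: the issuer signs the canonical claims, and any later modification of a claim invalidates the proof. A minimal sketch of that idea, using an HMAC as a stand-in for the DID signature scheme (the key, claim names, and canonicalization are illustrative assumptions, not the W3C VC wire format):

```python
import hmac
import hashlib

def make_proof(issuer_key, claims):
    """Issuer side: sign a canonical serialization of the claims.
    HMAC stands in here for the DID document's signature suite."""
    payload = "|".join(f"{k}={claims[k]}" for k in sorted(claims))
    return hmac.new(issuer_key, payload.encode(), hashlib.sha256).hexdigest()

def verify_presentation(issuer_key, claims, proof):
    """Verifier side: recompute the proof over the presented claims.
    Any altered claim (e.g. a rolled-back mileage) fails the check."""
    return hmac.compare_digest(make_proof(issuer_key, claims), proof)
```

This is why a seller cannot edit issued data before listing it: the buyer recomputes the proof over the presented claims and rejects the listing on any mismatch.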
Remotely sensed data have been used in various fields, such as disaster response, agriculture, urban planning, and the military. Recently, demand for multitemporal datasets with high spatial resolution has increased. This manuscript proposes an automatic image matching algorithm using a deep learning technique to utilize multitemporal remotely sensed datasets. The proposed deep learning model is based on High Resolution Net (HRNet), which is widely used in image segmentation. In this manuscript, a dense block was added to calculate the correlation map between images effectively and to increase learning efficiency. The proposed model was trained on multitemporal orthophotos from the National Geographic Information Institute (NGII). To evaluate the performance of image matching with the deep learning model, a comparative evaluation was performed. In the experiment, the average horizontal error of the proposed algorithm at an 80% image matching rate was 3 pixels, while that of Zero-mean Normalized Cross-Correlation (ZNCC) was 25 pixels. In particular, the proposed method was confirmed to be effective even in mountainous and farmland areas, where images change with vegetation growth. Therefore, the proposed deep learning algorithm is expected to perform relative image registration and image matching of multitemporal remotely sensed datasets.
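The ZNCC baseline used in the comparison above subtracts each patch's mean and normalizes by the standard deviations, so the score is invariant to additive brightness and multiplicative contrast changes. On flat 1-D patch vectors it is:

```python
import math

def zncc(a, b):
    """Zero-mean normalized cross-correlation of two equal-length patches.
    Returns a value in [-1, 1]; 1 means a perfect (affine-brightness) match."""
    mean_a = sum(a) / len(a)
    mean_b = sum(b) / len(b)
    num = sum((x - mean_a) * (y - mean_b) for x, y in zip(a, b))
    den = math.sqrt(sum((x - mean_a) ** 2 for x in a)
                    * sum((y - mean_b) ** 2 for y in b))
    return num / den

# A patch and a contrast-scaled copy of it still score 1.0.
print(zncc([1, 2, 3], [2, 4, 6]))
```

In template matching, this score is evaluated at every candidate offset and the offset with the maximum ZNCC is taken as the match, which is the procedure the proposed deep learning model outperforms in the experiment.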
Journal of the Korean Regional Science Association / v.36, no.1 / pp.51-67 / 2020
The purpose of this study is to estimate housing area per capita in order to verify whether public Big Data (the building ledger and the resident registration ledger) can be used in place of the National Census and the Housing Survey. A Mankiw and Weil (MW) model was constructed by extracting samples of general detached houses and flat houses from the public big data, and it was compared with the result of the traditional survey method. MW models were then established for the 25 municipalities of Seoul. The results confirm that public big data make it possible to establish MW models comparable to those from regular surveys, and to build models for basic localities that are difficult to cover with regular survey methods. Public big data have the advantage of expanding the knowledge frontier, but there are limitations because the data were generated for other, original purposes. In addition, the difficult process of accessing personal information is a burden on the analysis. Continuing research is needed on how public big data should be processed to complement or replace traditional statistical surveys.
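The MW model evaluated above expresses a household's housing demand as the sum of age-cohort coefficients over its members (the coefficients are estimated by regressing housing area on age-group dummies). A minimal sketch of the demand side; the cohort boundaries and coefficient values below are hypothetical, not estimates from this study:

```python
# Hypothetical per-person demand coefficients (m^2 of housing area),
# standing in for regression estimates on age-group dummies.
COEF = {"0-19": 8.0, "20-39": 18.0, "40-59": 24.0, "60+": 20.0}

def household_demand(members):
    """Total housing area demanded by a household: the sum of the
    age-cohort coefficient for each member."""
    return sum(COEF[age_group] for age_group in members)

# Two adults aged 40-59 and one child.
print(household_demand(["40-59", "40-59", "0-19"]))
```

Aggregating this over a locality's age structure is what lets the model project housing demand from ledger-based population counts rather than survey samples.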
Korean Journal of Construction Engineering and Management / v.16, no.2 / pp.65-76 / 2015
The Clean Development Mechanism (CDM) is the multilateral 'cap and trade' system endorsed by the Kyoto Protocol. CDM allows developed (Annex I) countries to buy CER credits from New and Renewable (NE) projects in non-Annex countries to meet their carbon reduction requirements. This in effect subsidizes and promotes NE projects in developing countries, ultimately reducing global greenhouse gases (GHG). To be registered as a CDM project, a project must prove 'additionality', which depends on numerous factors including the adopted technology, the baseline methodology, emission reductions, and the project's internal rate of return. This makes it difficult to determine ex ante whether a project will be accepted as a CDM-approved project, and it entails sunk costs and even project cancellation for the project stakeholders. Focusing on hydro power projects and employing UNFCCC public data, this research developed a prediction model using logistic regression and CART to determine the likelihood of approval as a CDM project. The AUC was 0.7674 for the logistic regression model and 0.7231 for the CART model, indicating acceptable prediction accuracy. More importantly, the results identify the emission reduction amount, MW per hour, and the investment/emission ratio as crucial variables, whereas the baseline methodology and technology type were insignificant. This demonstrates that, at least for hydro power projects, the specific technology matters less than the amount of emission reductions, a relatively small project scale, and the investment-to-carbon-reduction ratio.
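The AUC figures reported above measure ranking quality: the probability that a randomly chosen approved project receives a higher model score than a randomly chosen rejected one (the Mann-Whitney formulation of the ROC area). On raw labels and scores it can be computed directly:

```python
def auc(labels, scores):
    """Area under the ROC curve via pairwise comparison:
    the fraction of (positive, negative) pairs where the positive
    outscores the negative, counting ties as half."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Perfect separation of approved (1) and rejected (0) projects gives 1.0.
print(auc([1, 1, 0, 0], [0.9, 0.8, 0.3, 0.2]))
```

An AUC of 0.5 corresponds to random ranking, so values around 0.72 to 0.77, as reported for the CART and logistic regression models, indicate a model that is clearly informative though far from a perfect classifier.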