• Title/Summary/Keyword: improving accuracy


KNU Korean Sentiment Lexicon: Bi-LSTM-based Method for Building a Korean Sentiment Lexicon (Bi-LSTM 기반의 한국어 감성사전 구축 방안)

  • Park, Sang-Min;Na, Chul-Won;Choi, Min-Seong;Lee, Da-Hee;On, Byung-Won
    • Journal of Intelligence and Information Systems / v.24 no.4 / pp.219-240 / 2018
  • Sentiment analysis, one of the text mining techniques, extracts subjective content embedded in text documents. Sentiment analysis methods have recently been used widely in many fields: for example, data-driven surveys analyze the subjectivity of text posted by users, and market research quantifies a product's reputation by analyzing users' review posts. The basic approach to sentiment analysis uses a sentiment dictionary (or lexicon), a list of sentiment vocabularies with positive, neutral, or negative semantics. In general, the meaning of many sentiment words differs across domains. For example, the word 'sad' carries negative meaning in most domains but not necessarily in the movie domain. Accurate sentiment analysis therefore requires a sentiment dictionary built for the given domain. However, building such a lexicon is time-consuming, and many sentiment vocabularies are missed unless a general-purpose sentiment lexicon is used as a starting point. To address this problem, several studies have constructed domain-specific sentiment lexicons based on the general-purpose lexicons 'OPEN HANGUL' and 'SentiWordNet'. However, OPEN HANGUL is no longer serviced, and SentiWordNet does not work well because of language differences in converting Korean words into English. Such general-purpose sentiment lexicons are therefore of limited use as seed data for building a domain-specific sentiment lexicon. In this article, we construct the 'KNU Korean Sentiment Lexicon (KNU-KSL)', a new general-purpose Korean sentiment dictionary that is more advanced than existing general-purpose lexicons. The proposed dictionary, a list of domain-independent sentiment words such as 'thank you', 'worthy', and 'impressed', is built so that the sentiment dictionary for a target domain can be constructed quickly. In particular, it builds sentiment vocabularies by analyzing the glosses contained in the Standard Korean Language Dictionary (SKLD) through the following procedure: first, we propose a sentiment classification model based on Bidirectional Long Short-Term Memory (Bi-LSTM); second, the proposed deep learning model automatically classifies each gloss as having either positive or negative meaning; third, positive words and phrases are extracted from the glosses classified as positive, while negative words and phrases are extracted from those classified as negative. Our experimental results show that the average accuracy of the proposed sentiment classification model reaches 89.45%. In addition, the sentiment dictionary is further extended using various external sources, including SentiWordNet, SenticNet, Emotional Verbs, and Sentiment Lexicon 0603. Furthermore, we add sentiment information about coined words and emoticons that are used frequently on the Web. KNU-KSL contains a total of 14,843 sentiment vocabularies, each of which is a 1-gram, 2-gram, phrase, or sentence pattern. Unlike existing sentiment dictionaries, it is composed of words that are not tied to particular domains. The recent trend in sentiment analysis is to use deep learning techniques without sentiment dictionaries, and the perceived importance of developing sentiment dictionaries has gradually declined. However, a recent study shows that the words in a sentiment dictionary can be used as features of deep learning models, yielding higher sentiment analysis accuracy (Teng, Z., 2016). This indicates that a sentiment dictionary is useful not only for sentiment analysis itself but also as a source of features for deep learning models. The proposed dictionary can serve as basic data for constructing the sentiment lexicon of a particular domain and as features for deep learning models. It is also useful for automatically and quickly building large training sets for deep learning models.
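The following is a minimal sketch (assumptions throughout, not the authors' code) of the kind of Bi-LSTM gloss classifier described above: embed a tokenized SKLD gloss, encode it with a bidirectional LSTM, and predict whether the gloss carries positive or negative meaning. The vocabulary size, gloss length, layer widths, and training data below are placeholders for illustration.

```python
# Minimal Bi-LSTM gloss-polarity classifier sketch (illustrative assumptions only).
import numpy as np
from tensorflow.keras import layers, models

VOCAB_SIZE = 20000   # assumed tokenizer vocabulary size
MAX_LEN = 40         # assumed maximum gloss length in tokens

model = models.Sequential([
    layers.Embedding(VOCAB_SIZE, 128),
    layers.Bidirectional(layers.LSTM(64)),   # reads the gloss in both directions
    layers.Dense(1, activation="sigmoid"),   # 1 = positive gloss, 0 = negative gloss
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Synthetic stand-ins for padded, integer-encoded glosses and their polarity labels.
x = np.random.randint(1, VOCAB_SIZE, size=(256, MAX_LEN))
y = np.random.randint(0, 2, size=(256,))
model.fit(x, y, epochs=1, batch_size=32, verbose=0)
```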

Self-optimizing feature selection algorithm for enhancing campaign effectiveness (캠페인 효과 제고를 위한 자기 최적화 변수 선택 알고리즘)

  • Seo, Jeoung-soo;Ahn, Hyunchul
    • Journal of Intelligence and Information Systems / v.26 no.4 / pp.173-198 / 2020
  • Many studies have long been conducted in academia on predicting the success of customer campaigns, and prediction models applying various techniques are still being studied. Recently, as campaign channels have expanded with the rapid growth of online business, companies run campaigns of a variety and scale that cannot be compared to the past. However, customers increasingly perceive campaigns as spam as fatigue from duplicate exposure grows, and from a corporate standpoint the effectiveness of the campaigns themselves is decreasing: the cost invested in campaigns rises while the actual success rate stays low. Accordingly, various studies are under way to improve campaign effectiveness in practice. A campaign system ultimately aims to increase the success rate of campaigns by collecting and analyzing various customer-related data and using them in campaigns, and recent work attempts to predict campaign responses with machine learning. Because campaign data contain many features, selecting appropriate features is very important. If all input data are used when classifying a large amount of data, learning time grows as the number of classes expands, so a minimal input data set must be extracted from the entire data. In addition, when a model is trained with too many features, prediction accuracy may degrade because of overfitting or correlation between features. Therefore, to improve accuracy, a feature selection technique that removes features close to noise should be applied; feature selection is a necessary step when analyzing a high-dimensional data set. Among greedy algorithms, SFS (Sequential Forward Selection), SBS (Sequential Backward Selection), and SFFS (Sequential Floating Forward Selection) are widely used traditional feature selection techniques, but when there are many features they suffer from poor classification performance and long learning times. In this study, we therefore propose an improved feature selection algorithm to enhance the effectiveness of existing campaigns. The purpose of this study is to improve the existing sequential SFFS method in the search for the feature subsets that underpin machine learning model performance, using the statistical characteristics of the data processed in the campaign system. Features with a strong influence on performance are derived first, features with a negative effect are removed, and the sequential method is then applied so that the search becomes more efficient and the improved algorithm generalizes better. The proposed model showed better search and prediction performance than the traditional greedy algorithm: campaign success prediction was higher than with the original data set, the greedy algorithm, a genetic algorithm (GA), and recursive feature elimination (RFE). In addition, the improved feature selection algorithm helped in analyzing and interpreting the prediction results by providing the importance of the derived features. These included features already known statistically to matter, such as age, customer rating, and sales, but also features rarely used by campaign planners to select campaign targets, such as the combined product name, the average three-month data consumption rate, and the last three months' wireless data usage, which were unexpectedly selected as important for campaign response. It was also confirmed that base attributes can be very important features depending on the type of campaign, making it possible to analyze and understand the important characteristics of each campaign type.
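As context for the sequential search the abstract builds on, here is a minimal sketch of plain sequential forward selection with cross-validated accuracy as the criterion; the paper's contribution is a statistically guided improvement of SFFS, which this sketch does not reproduce. The data set and classifier are illustrative assumptions.

```python
# Minimal sequential forward selection (SFS) sketch: greedily add the feature that
# most improves cross-validated accuracy, stopping when no feature helps.
# Synthetic data and logistic regression are assumptions for illustration only.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=500, n_features=20, n_informative=5, random_state=0)

selected, remaining, best_score = [], list(range(X.shape[1])), 0.0
while remaining:
    scores = {f: cross_val_score(LogisticRegression(max_iter=1000),
                                 X[:, selected + [f]], y, cv=5).mean()
              for f in remaining}
    f_best = max(scores, key=scores.get)
    if scores[f_best] <= best_score:   # stop when accuracy no longer improves
        break
    best_score = scores[f_best]
    selected.append(f_best)
    remaining.remove(f_best)

print("selected features:", selected, "cv accuracy:", round(best_score, 3))
```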

Development of a complex failure prediction system using Hierarchical Attention Network (Hierarchical Attention Network를 이용한 복합 장애 발생 예측 시스템 개발)

  • Park, Youngchan;An, Sangjun;Kim, Mintae;Kim, Wooju
    • Journal of Intelligence and Information Systems / v.26 no.4 / pp.127-148 / 2020
  • A data center is a physical facility for accommodating computer systems and related components, and it is an essential foundation for next-generation core industries such as big data, smart factories, wearables, and smart homes. In particular, with the growth of cloud computing, proportional expansion of data center infrastructure is inevitable. Monitoring the health of data center facilities is a way to maintain and manage the system and prevent failure. If a failure occurs in some element of the facility, it may affect not only the relevant equipment but also other connected equipment and cause enormous damage. IT facilities in particular fail irregularly because of their interdependence, and the cause is difficult to determine. Previous studies on predicting failure in data centers treated each server as a single, independent state and did not assume that devices interact. In this study, therefore, data center failures were classified into failures occurring inside the server (Outage A) and failures occurring outside the server (Outage B), and the analysis focused on complex failures occurring within servers. Failures outside the server include power, cooling, and user errors; since such failures can be prevented in the early stages of data center construction, various solutions are already being developed. By contrast, the cause of failures occurring inside the server is difficult to determine, and adequate prevention has not yet been achieved. In particular, server failures rarely occur in isolation: a failure on one server can cause failures on other servers, or be triggered by something received from another server. In other words, whereas existing studies analyzed failures under the assumption of a single server that does not affect other servers, this study assumes that failures propagate between servers. To define complex failure situations in the data center, failure history data for each piece of equipment in the data center were used. Four major failure types are considered in this study: Network Node Down, Server Down, Windows Activation Services Down, and Database Management System Service Down. The failures occurring on each device are sorted in chronological order, and when a failure on one piece of equipment is followed by a failure on another piece of equipment within 5 minutes, the two failures are defined as occurring simultaneously. After configuring sequences of devices that failed at the same time, the 5 devices that most frequently failed together within the configured sequences were selected, and the cases in which the selected devices failed at the same time were confirmed through visualization. Since the server resource information collected for failure analysis is a time series with temporal flow, we used Long Short-Term Memory (LSTM), a deep learning algorithm that can predict the next state from previous states. In addition, unlike the single-server case, a Hierarchical Attention Network model structure was used to reflect the fact that the severity of multiple failures differs across servers. This algorithm improves prediction accuracy by giving more weight to servers with a larger impact on the failure. The study began by defining the failure types and selecting the analysis target. In the first experiment, the same collected data were analyzed both as a single-server state and as a multi-server state, and the results were compared. The second experiment improved prediction accuracy in the multi-server case by optimizing the threshold for each server. In the first experiment, the single-server setting predicted no failure for three of the five servers even though failures actually occurred, whereas the multi-server setting predicted failures for all five servers. This experimental result supports the hypothesis that servers affect one another. Overall, the study confirmed that prediction performance was superior when multiple servers were assumed than when a single server was assumed. In particular, applying the Hierarchical Attention Network, which assumes that each server's influence differs, improved the analysis, and applying a different threshold for each server further improved prediction accuracy. This study showed that failures whose cause is difficult to determine can be predicted from historical data, and it presents a model that can predict failures occurring on servers in data centers. The results are expected to help prevent failures in advance.
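The following is a minimal sketch, not the authors' model, of the hierarchical idea described above: one LSTM encodes each server's resource time series, and an attention layer then weights the servers by their estimated impact before the failure prediction. All dimensions and the toy data are assumptions.

```python
# Per-server LSTM encoders + attention over servers (a simplified hierarchical
# attention structure). Shapes and data are illustrative assumptions only.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

N_SERVERS, TIMESTEPS, N_METRICS = 5, 60, 8                    # assumed problem dimensions

inp = layers.Input(shape=(N_SERVERS, TIMESTEPS, N_METRICS))
server_repr = layers.TimeDistributed(layers.LSTM(32))(inp)    # one vector per server
scores = layers.Dense(1)(server_repr)                         # unnormalized impact scores
weights = layers.Softmax(axis=1)(scores)                      # attention over the 5 servers
context = layers.Lambda(lambda t: tf.reduce_sum(t[0] * t[1], axis=1))([weights, server_repr])
out = layers.Dense(1, activation="sigmoid")(context)          # failure within the horizon?

model = models.Model(inp, out)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Toy batch standing in for 5 servers x 60 time steps x 8 resource metrics.
x = np.random.rand(16, N_SERVERS, TIMESTEPS, N_METRICS).astype("float32")
y = np.random.randint(0, 2, size=(16, 1))
model.fit(x, y, epochs=1, verbose=0)
```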

Validation of ENVI-met Model with In Situ Measurements Considering Spatial Characteristics of Land Use Types (토지이용 유형별 공간특성을 고려한 ENVI-met 모델의 현장측정자료 기반의 검증)

  • Song, Bong-Geun;Park, Kyung-Hun;Jung, Sung-Gwan
    • Journal of the Korean Association of Geographic Information Studies / v.17 no.2 / pp.156-172 / 2014
  • This research measures and compares on-site net radiation energy, air temperature, wind speed, and surface temperature for different land use types in urban areas of Changwon, southern Gyeongsangnam-do, in order to assess the accuracy of the ENVI-met model, a microclimate analysis program. The on-site mobile measurements were carried out over three days: two during the daytime and one during the nighttime. The ENVI-met simulations were run for the same time periods as the on-site measurements. The results indicated that the ENVI-met model produced net radiation approximately $300Wm^{-2}$ higher than the on-site measurement during the daytime, whereas the on-site measurement was approximately $200Wm^{-2}$ higher during the nighttime. Air temperature was approximately $2-6^{\circ}C$ higher in the on-site measurement during both daytime and nighttime, and the on-site surface temperature exceeded the ENVI-met value by approximately $7-13^{\circ}C$. For wind speed, there was a significant difference between the ENVI-met results and the on-site measurements. Regarding correlations between the ENVI-met results and the on-site measurements, air temperature showed a significantly high correlation, whereas the correlations for net radiation energy, surface temperature, and wind speed were very low. These discrepancies appear to result from over- or underestimation of solar and terrestrial radiation and from the climatic conditions and land cover characteristics of the surrounding areas. Hence, these factors should be considered when applying ENVI-met results in urban and environmental planning aimed at improving the urban microclimate.
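For readers interested in the comparison itself, the snippet below is a minimal sketch, not the study's workflow, of the agreement statistics typically used in such model validation: mean bias, RMSE, and Pearson correlation between modeled and measured values. The numbers are placeholders, not the study's data.

```python
# Agreement statistics between modeled and measured values (illustrative values only).
import numpy as np
from scipy import stats

measured = np.array([28.4, 29.1, 30.2, 27.8, 26.5, 25.9])   # assumed air temperatures (deg C)
modeled  = np.array([26.1, 27.0, 28.3, 25.9, 24.8, 24.1])   # assumed ENVI-met output (deg C)

bias = np.mean(modeled - measured)                   # model minus observation
rmse = np.sqrt(np.mean((modeled - measured) ** 2))
r, p = stats.pearsonr(modeled, measured)

print(f"bias={bias:.2f} degC, RMSE={rmse:.2f} degC, r={r:.3f} (p={p:.3f})")
```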

Relationship Analysis between Lineaments and Epicenters using Hotspot Analysis: The Case of Geochang Region, South Korea (핫스팟 분석을 통한 거창지역의 선구조선과 진앙의 상관관계 분석)

  • Jo, Hyun-Woo;Chi, Kwang-Hoon;Cha, Sungeun;Kim, Eunji;Lee, Woo-Kyun
    • Korean Journal of Remote Sensing / v.33 no.5_1 / pp.469-480 / 2017
  • This study aims to understand the relationship between lineaments and epicenters in the Geochang region, Gyeongsangnam-do, South Korea. Instrumental observation of earthquakes by the Korea Meteorological Administration (KMA) began in 1978, and six earthquakes with magnitudes ranging from 2 to 2.5 occurred in the Geochang region between 1978 and 2016. Lineaments were extracted from a LANDSAT 8 satellite image and a shaded relief map displayed in three dimensions using a Digital Elevation Model (DEM). Lineament density was then examined statistically by hotspot analysis. Hexagonal grids were used because a hexagonal pattern expresses lineaments with less discontinuity than square grids, and the grid size was selected to minimize the variance of lineament density. Since hotspot analysis measures the extent of clustering with a Z score, Z scores computed from lineament frequency ($L_f$), length ($L_d$), and intersection ($L_t$) were used to find lineament clusters in the density map. The Z scores were also extracted at the epicenters and examined to see how each density element relates to the epicenters. As a result, 15 of the 18 density values, recorded as 3 elements at 6 epicenters, were higher than 1.65, the 95% level of the standard normal distribution. This indicates that the epicenters coincide with high-density areas. In particular, $L_f$ and $L_t$ showed a significant relationship with the epicenters, lying above the 95% level of the standard normal distribution except for one epicenter in $L_t$. This study can be used to identify potential seismic zones by improving the accuracy with which the spatial distribution of lineaments is expressed and by analyzing the relationship between lineament density and epicenters. However, additional studies over a wider area with more epicenters are recommended to corroborate the results.
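As a rough illustration of the Z-score criterion mentioned above, the sketch below standardizes made-up per-hexagon lineament densities and flags cells above 1.65; it deliberately omits the spatial weighting of a full Getis-Ord hotspot analysis, so it is a simplification, not the study's method.

```python
# Simplified hotspot flagging: standardize densities and mark cells with Z > 1.65.
# The density values are invented for illustration; real hotspot analysis also
# accounts for spatial neighbourhood weights, which are omitted here.
import numpy as np

density = np.array([0.8, 1.1, 0.9, 2.7, 3.1, 1.0, 0.7, 2.9, 1.2, 0.6])  # per-hexagon density
z = (density - density.mean()) / density.std(ddof=1)

hotspots = np.where(z > 1.65)[0]
print("hotspot cells:", hotspots.tolist(), "z scores:", np.round(z[hotspots], 2).tolist())
```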

A Study on Relationships Between Environment, Organizational Structure, and Organizational Effectiveness of Public Health Centers in Korea (보건소의 환경, 조직구조와 조직유효성과의 관계)

  • Yun, Soon-Nyoung
    • Research in Community and Public Health Nursing / v.6 no.1 / pp.5-33 / 1995
  • The objectives of the study are two-fold: one is to explore the relationship between environment, organizational structure, and organizational effectiveness of public health centers in Korea, and the other is to examine the validity of contingency theory for improving the organizational structure of public health care agencies, with special emphasis on public health nursing administration. Accordingly, the conceptual model of the study consisted of three concepts drawn from contingency theory: environment, organizational structure, and organizational effectiveness. Data were collected from 1 May through 30 June 1990. From the total of 249 health centers in the country, 105 centers were sampled non-proportionally according to geopolitical distribution, and 73 of them responded to the mailed questionnaire. The health center was the unit of analysis, and various statistical techniques were used: reliability analysis (Cronbach's alpha) for the 4 measurement tools; the Shapiro-Wilk statistic for normality tests of the scores of 6 variables; and ANOVA, Pearson correlation analysis, regression analysis, and canonical correlation analysis to test the relationships and differences between variables. The results were as follows: 1. No significant differences between formalization, decision-making authority, and environmental complexity were found (F=1.383, P=.24; F=.801, P=.37). 2. Negative relationships between formalization and decision-making authority were found for both urban and rural health centers (r=-.470, P=.002; r=-.348, P=.46). 3. No significant relationship between formalization and job satisfaction was found for either urban or rural health centers (r=-.242, P=.132; r=-.060, P=.739). 4. A significant positive relationship between decision-making authority and job satisfaction was found in urban health centers (r=.504, P=.0009), but no such relationship was observed in rural health centers. The regression coefficient between them was statistically significant ($\beta=1.535$, P=.0002), and the adequacy of the regression line was accepted (W=.975, P=.420). 5. No significant relationships were found among formalization and family planning services, maternal health services, and tuberculosis control services for either urban or rural health centers. 6. Among decision-making authority and family planning services, maternal health services, and tuberculosis control services, a significant positive relationship was found between decision-making authority and family planning services (r=.286, P=.73). 7. A significant difference was found in maternal health services by type of health center (F=5.13, P=.026), but no difference was found in tuberculosis control services by type of health center, formalization, or decision-making authority. 8. Significant relationships were found between family planning services and both maternal health services and tuberculosis control services, and between maternal health services and tuberculosis control services (r=-.499, P=.001; r=.457, P=.004; r=.495, P=.002) in urban health centers. In rural health centers, the relationships between family planning services and tuberculosis control services, and between maternal health services and tuberculosis control services, were statistically significant (r=.534, P=.002; r=.389, P=.027); no significant relationship was found between family planning and maternal health services. 9. A significant positive canonical correlation was found between the group of independent variables consisting of formalization and decision-making authority and the group of dependent variables consisting of family planning services, maternal health services, and tuberculosis control services (Rc=.455, P=.02). In urban health centers no significant canonical correlation was found between them, but a significant canonical correlation was found in rural health centers (Rc=.578, P=.069). 10. The relationship between job satisfaction and health care productivity was not found to be significant. Through these results, the assumed relationship between environment and organizational structure was not supported in health centers. Therefore, the relationship that contingency theory proposes between organizational effectiveness and the congruence of environment and organizational structure could not be tested. However, decision-making authority was found to be an important structural variable affecting family planning services and job satisfaction in urban health centers. It is therefore suggested that decentralized decision making among health professionals would be a valuable strategy for improving organizational effectiveness in public health centers. It is also recommended that further studies testing contingency theory define the environment of public health centers in terms of variability and uncertainty rather than complexity.
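To make the canonical correlation step concrete, here is a minimal sketch, using synthetic data rather than the survey described above, of correlating a block of organizational-structure variables with a block of service-performance variables via scikit-learn's CCA. Variable names and numbers are assumptions.

```python
# Canonical correlation between two variable blocks (synthetic placeholder data).
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
n = 73                                     # number of responding health centers in the study
structure = rng.normal(size=(n, 2))        # stand-ins for formalization, decision-making authority
services = rng.normal(size=(n, 3))         # stand-ins for family planning, maternal health, TB control
services[:, 0] += 0.6 * structure[:, 1]    # inject a weak association for illustration

cca = CCA(n_components=1).fit(structure, services)
u, v = cca.transform(structure, services)
rc = np.corrcoef(u[:, 0], v[:, 0])[0, 1]   # first canonical correlation
print("canonical correlation Rc =", round(rc, 3))
```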


Empirical Estimation and Diurnal Patterns of Surface PM2.5 Concentration in Seoul Using GOCI AOD (GOCI AOD를 이용한 서울 지역 지상 PM2.5 농도의 경험적 추정 및 일 변동성 분석)

  • Kim, Sang-Min;Yoon, Jongmin;Moon, Kyung-Jung;Kim, Deok-Rae;Koo, Ja-Ho;Choi, Myungje;Kim, Kwang Nyun;Lee, Yun Gon
    • Korean Journal of Remote Sensing / v.34 no.3 / pp.451-463 / 2018
  • Empirical/statistical models to estimate ground-level particulate matter ($PM_{2.5}$) concentration from the Geostationary Ocean Color Imager (GOCI) Aerosol Optical Depth (AOD) product were developed and analyzed for Seoul, South Korea, over the year 2015. In constructing the AOD-$PM_{2.5}$ models, two vertical correction methods, using the planetary boundary layer height and the vertical ratio of aerosol, and a humidity correction method using the hygroscopic growth factor were applied to the respective models. The vertical correction of AOD and the humidity correction of $PM_{2.5}$ concentration played an important role in improving the accuracy of the overall estimation. Multiple linear regression (MLR) models with additional meteorological factors (wind speed, visibility, and air temperature) affecting the AOD-$PM_{2.5}$ relationship were then constructed for the whole year and for each season. As a result, the determination coefficients of the MLR models increased significantly compared with those of the empirical models. In this study, we also analyzed the seasonal, monthly, and diurnal characteristics of the AOD-$PM_{2.5}$ model. When the MLR model was constructed seasonally, the tendency of the whole-year model to underestimate high $PM_{2.5}$ cases was improved. The monthly and diurnal patterns of observed and estimated $PM_{2.5}$ were similar. The results of this study, which estimates surface $PM_{2.5}$ concentration using geostationary satellite AOD, are expected to be applicable to the future GK-2A and GK-2B satellites.
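The snippet below is a minimal sketch, with synthetic data rather than the GOCI and station data used in the study, of a multiple linear regression that estimates PM2.5 from a corrected AOD plus the meteorological predictors named above (wind speed, visibility, air temperature).

```python
# Multiple linear regression for PM2.5 estimation (synthetic, illustrative data only).
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
n = 300
aod  = rng.uniform(0.1, 1.2, n)      # vertically/humidity-corrected AOD (assumed)
wind = rng.uniform(0.5, 8.0, n)      # wind speed, m/s
vis  = rng.uniform(3.0, 20.0, n)     # visibility, km
temp = rng.uniform(-5.0, 30.0, n)    # air temperature, deg C
pm25 = 40 * aod - 1.5 * wind - 0.8 * vis + 0.3 * temp + rng.normal(0, 5, n)  # fabricated target

X = np.column_stack([aod, wind, vis, temp])
mlr = LinearRegression().fit(X, pm25)
print("coefficients:", np.round(mlr.coef_, 2), "R^2:", round(r2_score(pm25, mlr.predict(X)), 3))
```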

A Hybrid Forecasting Framework based on Case-based Reasoning and Artificial Neural Network (사례기반 추론기법과 인공신경망을 이용한 서비스 수요예측 프레임워크)

  • Hwang, Yousub
    • Journal of Intelligence and Information Systems / v.18 no.4 / pp.43-57 / 2012
  • To maintain a competitive advantage in a constantly changing business environment, enterprise management must make the right decisions in many business activities based on both internal and external information, so providing accurate information plays a prominent role in decision making. Intuitively, historical data can provide a feasible estimate through forecasting models. If the service department can estimate the service quantity for the next period, it can effectively control the inventory of service-related resources such as staff, parts, and other facilities, and the production department can build a load map for improving product quality. Obtaining an accurate service forecast therefore appears critical for manufacturing companies. Numerous investigations addressing this problem have generally employed statistical methods such as regression or autoregressive moving average models. However, these methods are efficient only for data that are seasonal or cyclical; if the data are influenced by the special characteristics of a product, they are not feasible. In this research, we propose a forecasting framework that predicts the service demand of a manufacturing organization by combining case-based reasoning (CBR) with an unsupervised artificial neural network-based clustering analysis (Self-Organizing Maps, SOM). We believe this is one of the first attempts at applying unsupervised artificial neural network-based machine learning techniques in the service forecasting domain. The proposed approach has several appealing features: (1) it applies CBR and SOM in a new forecasting domain, namely service demand forecasting; and (2) it combines CBR and SOM to overcome the limitations of traditional statistical forecasting methods, and we have developed a service forecasting tool based on this combination. We conducted an empirical study on a real digital TV manufacturer (Company A) and evaluated the proposed approach and tool using the company's real sales and service data. In our experiments, we compared the performance of the proposed service forecasting framework with two other methods: a traditional CBR-based forecasting model and the existing service forecasting model used by Company A. We ran each service forecasting 144 times; each time, input data were randomly sampled for each forecasting framework. To evaluate the accuracy of the forecasting results, we used the Mean Absolute Percentage Error (MAPE) as the primary performance measure. We conducted a one-way ANOVA test on the 144 MAPE measurements for the three forecasting approaches; the F-ratio was 67.25 with a p-value of 0.000, meaning that the differences between the MAPE of the three approaches are significant. Since there is a significant difference among the approaches, we conducted Tukey's HSD post hoc test to determine exactly which means of MAPE differ significantly from which others. In terms of MAPE, Tukey's HSD post hoc test grouped the three approaches into three different subsets in the following order: our proposed approach > the traditional CBR-based forecasting approach > the existing approach used by Company A. Consequently, our empirical experiments show that the proposed approach outperformed both the traditional CBR-based forecasting model and the existing service forecasting model used by Company A. The rest of this paper is organized as follows: Section 2 provides background on CBR and SOM; Section 3 presents the hybrid service forecasting framework based on case-based reasoning and Self-Organizing Maps; Section 4 summarizes the empirical evaluation results; and Section 5 discusses conclusions and future research directions.
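As an illustration of the evaluation protocol described above, the sketch below compares three sets of MAPE measurements with a one-way ANOVA; the three samples are synthetic stand-ins, not the paper's 144 experimental runs.

```python
# One-way ANOVA over MAPE samples from three forecasting approaches (synthetic data).
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
mape_proposed = rng.normal(8.0, 1.5, 144)    # hypothetical MAPE runs (%) for the proposed approach
mape_cbr      = rng.normal(11.0, 1.5, 144)   # hypothetical runs for plain CBR forecasting
mape_existing = rng.normal(14.0, 1.5, 144)   # hypothetical runs for the company's existing model

f_stat, p_val = stats.f_oneway(mape_proposed, mape_cbr, mape_existing)
print(f"F = {f_stat:.2f}, p = {p_val:.4f}")
# A significant p-value would motivate a post hoc comparison (e.g. Tukey's HSD via
# statsmodels' pairwise_tukeyhsd) to see which pairs of approaches actually differ.
```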

Inexpensive Visual Motion Data Glove for Human-Computer Interface Via Hand Gesture Recognition (손 동작 인식을 통한 인간 - 컴퓨터 인터페이스용 저가형 비주얼 모션 데이터 글러브)

  • Han, Young-Mo
    • The KIPS Transactions:PartB / v.16B no.5 / pp.341-346 / 2009
  • The motion data glove is a representative human-computer interaction tool that inputs human hand gestures to computers by measuring their motions. It is essential equipment for new computing technologies including home automation, virtual reality, biometrics, and motion capture. To encourage its popular use, this paper attempts to develop an inexpensive visual-type motion data glove that can be used without any special equipment. The proposed approach has a distinctive feature: it can be produced at low cost because it does not use the expensive motion-sensing fibers employed in conventional approaches, which makes easy production and widespread use possible. The approach adopts a visual method, obtained by improving conventional optical motion capture technology, instead of a mechanical method using motion-sensing fibers. Compared to conventional visual methods, the proposed method has the following advantages and original contributions. First, conventional visual methods use many cameras and much equipment to reconstruct the 3D pose while eliminating occlusions, whereas the proposed method adopts a mono-vision approach that permits simple and low-cost equipment. Second, conventional mono-vision methods have difficulty reconstructing the 3D pose of occluded parts in images because they handle occlusion poorly, whereas the proposed approach can reconstruct occluded parts by using specially designed thin-bar-shaped optic indicators. Third, many conventional methods use nonlinear numerical image analysis algorithms, which are inconvenient with respect to initialization and computation time; the proposed method avoids these inconveniences by using a closed-form image analysis algorithm obtained from an original formulation. Fourth, many conventional closed-form algorithms rely on approximations in their formulation, which lowers accuracy and confines their applicability because of singularities; the proposed method avoids these disadvantages through an original formulation in which a closed-form algorithm is derived using exponential-form twist coordinates instead of approximations or local parameterizations such as Euler angles.
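To make the last point concrete, here is a minimal sketch of the exponential map for the rotation part of a twist (Rodrigues' formula), the kind of singularity-free representation the abstract credits over Euler angles; it is illustrative only and not the paper's derivation.

```python
# Rodrigues' formula: rotation matrix from a unit axis and an angle (illustrative only).
import numpy as np

def hat(w):
    """Skew-symmetric (hat) matrix of a 3-vector w."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def exp_so3(w, theta):
    """exp(theta * hat(w)) for a unit axis w: R = I + sin(theta) W + (1 - cos(theta)) W^2."""
    W = hat(w)
    return np.eye(3) + np.sin(theta) * W + (1.0 - np.cos(theta)) * (W @ W)

R = exp_so3(np.array([0.0, 0.0, 1.0]), np.pi / 2)   # 90-degree rotation about the z-axis
print(np.round(R, 3))
```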

Simulation Approach for the Tracing the Marine Pollution Using Multi-Remote Sensing Data (다중 원격탐사 자료를 활용한 해양 오염 추적 모의 실험 방안에 대한 연구)

  • Kim, Keunyong;Kim, Euihyun;Choi, Jun Myoung;Shin, Jisun;Kim, Wonkook;Lee, Kwang-Jae;Son, Young Baek;Ryu, Joo-Hyung
    • Korean Journal of Remote Sensing / v.36 no.2_2 / pp.249-261 / 2020
  • Coastal monitoring using multiple platforms and sensors is a very important tool for accurately understanding changes in the offshore marine environment and disasters with high temporal and spatial resolution. However, integrated observation studies using multiple platforms and sensors are scarce, and their efficiency and the limitations of combining them have not been evaluated. In this study, we aimed to propose an integrated observation method using multiple remote sensing platforms and sensors, and to diagnose its utility and limitations. Integrated in situ surveys were conducted using Rhodamine WT (RWT) fluorescent dye to simulate various marine disasters. In September 2019, after the fluorescent dye was injected into the waters of the South Sea-Yeosu Sea, the distribution and movement of the RWT dye patches were detected using satellite (Kompsat-2/3/3A, Landsat-8 OLI, Sentinel-3 OLCI, and GOCI), unmanned aircraft (Mavic 2 Pro and Inspire 2), and manned aircraft platforms. The initial RWT dye patch covered 2,600 ㎡ and spread to 62,000 ㎡ about 138 minutes later. The RWT patches gradually moved southwestward from the release point, consistent with the tidal current flowing southwest as the tide gradually fell. Unmanned Aerial Vehicle (UAV) images showed the highest spatial and temporal resolution, but the narrowest coverage area. Satellite images covered a wide area, but their long revisit cycles limited operability compared with the other platforms. Sentinel-3 OLCI and GOCI had the highest spectral resolution and signal-to-noise ratio (SNR), but their spatial resolution limited the detection of small fluorescent dye patches. The hyperspectral sensor mounted on the manned aircraft had the highest spectral resolution, but it was also somewhat limited in terms of operability. This simulation showed that integrated multi-platform observation can significantly improve temporal, spatial, and spectral resolution. In the future, if these results are linked to coastal numerical models, it will be possible to predict the transport and diffusion of contaminants, and they are expected to contribute to improving model accuracy when used as input and verification data for the numerical models.