• Title/Summary/Keyword: A* algorithm

Search Results: 54,221

An Implementation of Dynamic Gesture Recognizer Based on WPS and Data Glove (WPS와 장갑 장치 기반의 동적 제스처 인식기의 구현)

  • Kim, Jung-Hyun;Roh, Yong-Wan;Hong, Kwang-Seok
    • The KIPS Transactions:PartB
    • /
    • v.13B no.5 s.108
    • /
    • pp.561-568
    • /
    • 2006
  • WPS (Wearable Personal Station) for the next-generation PC can be defined as a core terminal of 'Ubiquitous Computing' that includes information processing and network functions and overcomes spatial limitations in the acquisition of new information. As a way to acquire significant dynamic gesture data from haptic devices, a traditional desktop-PC-based gesture recognizer using a wired communication module has several restrictions, such as spatial constraints, complexity of the transmission media (cable elements), limitation of motion, and inconvenience of use. Accordingly, in this paper, in order to overcome these problems, we implement a hand gesture recognition system using a fuzzy algorithm and a neural network for the Post PC (an embedded, ubiquitous environment using a Bluetooth module and WPS). Also, we propose the most efficient and reasonable hand gesture recognition interface for the Post PC through evaluation and analysis of the performance of each gesture recognition system. The proposed gesture recognition system consists of three modules: 1) a gesture input module that processes dynamic hand motion into input data, 2) a Relational Database Management System (hereafter, RDBMS) module that segments significant gestures from the input data, and 3) two different recognition modules, a fuzzy max-min module and a neural network module, that recognize significant gestures among continuous/dynamic gestures. Experimental results show an average recognition rate of 98.8% for the fuzzy max-min module and 96.7% for the neural network module on significant dynamic gestures.
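
As an illustration of the max-min composition idea behind a fuzzy recognition module like the one above, here is a minimal Python sketch; it is not the authors' implementation, and the gesture classes, fuzzy sets, and relation matrix are hypothetical placeholders.

```python
# A minimal sketch (not the paper's implementation) of fuzzy max-min
# composition for scoring a gesture feature vector against gesture classes.
import numpy as np

def fuzzy_max_min_classify(membership, relation):
    """Score each gesture class with max-min composition:
    score[c] = max_j min(membership[j], relation[c, j])."""
    return np.minimum(membership[None, :], relation).max(axis=1)

# Membership degrees of one glove sample to 4 hypothetical fuzzy sets
# (e.g. finger-bend low/high, wrist-pitch low/high).
x = np.array([0.8, 0.2, 0.6, 0.1])

# Relation matrix: rows = hypothetical gesture classes, columns = fuzzy sets.
R = np.array([[0.9, 0.1, 0.7, 0.2],   # "grasp"
              [0.2, 0.8, 0.3, 0.9],   # "release"
              [0.5, 0.5, 0.9, 0.4]])  # "point"

scores = fuzzy_max_min_classify(x, R)
print("predicted gesture index:", int(np.argmax(scores)), scores)
```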

Prelaunch Study of Validation for the Geostationary Ocean Color Imager (GOCI) (정지궤도 해색탑재체(GOCI) 자료 검정을 위한 사전연구)

  • Ryu, Joo-Hyung;Moon, Jeong-Eon;Son, Young-Baek;Cho, Seong-Ick;Min, Jee-Eun;Yang, Chan-Su;Ahn, Yu-Hwan;Shim, Jae-Seol
    • Korean Journal of Remote Sensing
    • /
    • v.26 no.2
    • /
    • pp.251-262
    • /
    • 2010
  • In order to provide quantitative control of the standard products of the Geostationary Ocean Color Imager (GOCI), on-board radiometric correction, atmospheric correction, and bio-optical algorithms are refined continuously through comprehensive and consistent calibration and validation procedures. The calibration/validation of the radiometric, atmospheric, and bio-optical data of GOCI uses temperature, salinity, ocean optics, fluorescence, and turbidity data sets from buoy and platform systems, as well as periodic oceanic environmental data. For calibration and validation of GOCI, we compared radiometric data between in-situ measurements and the HyperSAS instrument installed at the Ieodo Ocean Research Station, and between HyperSAS and SeaWiFS radiances. HyperSAS data differed slightly from the in-situ radiance and irradiance, but showed no spectral shift in the absorption bands. Although the radiance bands compared between HyperSAS and SeaWiFS had an average error of 25%, the absolute error dropped to a relatively low 11% when the atmospheric correction bands were omitted. This error is related to the SeaWiFS standard atmospheric correction process, and the error rate must be considered and improved for calibration and validation of GOCI. A reference target site around Dokdo Island was used for studying calibration and validation of GOCI. In-situ ocean- and bio-optical data were collected during August and October 2009. Reflectance spectra around Dokdo Island showed the optical characteristics of Case-1 water. Absorption spectra of chlorophyll, suspended matter, and dissolved organic matter also showed their spectral characteristics. MODIS Aqua-derived chlorophyll-a concentration was well correlated with the in-situ fluorometer values from the instrument installed on the Dokdo buoy. As we solve the problems of radiometric, atmospheric, and bio-optical correction, it will be possible to improve the future quality of the calibration and validation of GOCI.
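
The band-by-band comparison described above can be illustrated with a small sketch; the radiance values and the choice of atmospheric-correction bands below are made up for demonstration and are not the study's measurements.

```python
# A minimal sketch, with illustrative numbers, of a mean percentage-error
# comparison between in-situ (HyperSAS) and satellite (SeaWiFS) radiances,
# with and without the bands used for atmospheric correction.
import numpy as np

bands_nm    = np.array([412, 443, 490, 510, 555, 670, 765, 865])
hypersas_lw = np.array([1.10, 1.35, 1.50, 1.30, 1.05, 0.30, 0.08, 0.04])  # illustrative
seawifs_lw  = np.array([1.30, 1.55, 1.60, 1.42, 1.10, 0.36, 0.12, 0.07])  # illustrative
atm_corr    = bands_nm >= 765  # NIR bands assumed to be the atmospheric-correction bands

pct_err = np.abs(seawifs_lw - hypersas_lw) / hypersas_lw * 100
print("all bands     : %.1f%%" % pct_err.mean())
print("excluding NIR : %.1f%%" % pct_err[~atm_corr].mean())
```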

Sea Water Type Classification Around the Ieodo Ocean Research Station Based On Satellite Optical Spectrum (인공위성 광학 스펙트럼 기반 이어도 해양과학기지 주변 해수의 수형 분류)

  • Lee, Ji-Hyun;Park, Kyung-Ae;Park, Jae-Jin;Lee, Ki-Tack;Byun, Do-Seung;Jeong, Kwang-Yeong;Oh, Hyun-Ju
    • Journal of the Korean earth science society
    • /
    • v.43 no.5
    • /
    • pp.591-603
    • /
    • 2022
  • The color and optical properties of seawater are determined by the interaction between dissolved organic and inorganic substances and the plankton contained in it. The Ieodo Ocean Research Station (I-ORS), located in the East China Sea, is affected by the low-salinity water of the Yangtze River to the west and the Tsushima Warm Current to the south. Thus, it is a suitable site for analyzing fluctuations in circulation and optical properties around the Korean Peninsula. In this study, seawater surrounding the I-ORS was classified according to its optical characteristics using the satellite remote-sensing reflectance observed with the Moderate Resolution Imaging Spectroradiometer (MODIS)/Aqua and the National Aeronautics and Space Administration (NASA) bio-Optical Marine Algorithm Dataset (NOMAD) from January 2016 to December 2020. Additionally, the seasonal variation characteristics of optical water types (OWTs) were presented. A total of 59,532 satellite match-up data (d ≤ 10 km) collected from the seawater surrounding the I-ORS were classified into 23 types using the spectral angle mapper. The OWTs corresponding to relatively clear waters around the I-ORS accounted for more than 50% of the total. The OWT with the maximum frequency in summer was the opposite of that in winter. In particular, the OWTs corresponding to optically clear seawater were primarily present in the summer, whereas the same OWTs accounted for less than 1% of the data in winter. Considering the OWT fluctuations in the East China Sea, the I-ORS is inferred to be located in a transition zone of seawater. This study contributes to understanding the optical characteristics of seawater and improving the accuracy of satellite ocean color variables.
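
The spectral angle mapper used for the OWT classification computes the angle between an observed reflectance spectrum and each reference spectrum and assigns the closest type. Below is a minimal sketch with random placeholder spectra instead of the NOMAD-derived OWT means.

```python
# A minimal sketch of the spectral angle mapper (SAM) used to assign a
# remote-sensing reflectance spectrum to the nearest optical water type.
import numpy as np

def spectral_angle(a, b):
    """Angle (radians) between two reflectance spectra."""
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.arccos(np.clip(cos, -1.0, 1.0))

def classify_owt(rrs, reference_spectra):
    """Return the index of the OWT whose reference spectrum has the smallest angle."""
    angles = [spectral_angle(rrs, ref) for ref in reference_spectra]
    return int(np.argmin(angles)), angles

rng = np.random.default_rng(0)
owt_refs = rng.random((23, 6))   # 23 OWT mean spectra over 6 bands (placeholder values)
rrs_obs = rng.random(6)          # one observed Rrs spectrum (placeholder values)
owt, angles = classify_owt(rrs_obs, owt_refs)
print("assigned OWT:", owt + 1)
```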

Effect of Difference in Irrigation Amount on Growth and Yield of Tomato Plant in Long-term Cultivation of Hydroponics (장기 수경재배에서 급액량의 차이가 토마토 생육과 수량 특성에 미치는 영향)

  • Choi, Gyeong Lee;Lim, Mi Young;Kim, So Hui;Rho, Mi Young
    • Journal of Bio-Environment Control
    • /
    • v.31 no.4
    • /
    • pp.444-451
    • /
    • 2022
  • Recently, long-term cultivation has become more common with the increase in tomato hydroponics. In hydroponics, it is very important to supply an appropriate nutrient solution considering the nutrient and moisture requirements of the crop, in terms of productivity, resource use, and environmental conservation. Since seasonal environmental changes are severe in long-term cultivation, it is critical to manage irrigation with these changes in mind. Therefore, this study was carried out to investigate the effect of irrigation volume on growth and yield in long-term tomato cultivation on coir substrate. The irrigation volume was adjusted to four levels (high, medium high, medium low, and low) by varying the irrigation frequency. Irrigation scheduling (frequency) was controlled based on solar radiation measured by a radiation sensor installed outside the greenhouse, and irrigation was performed whenever the accumulated solar radiation energy reached a set value; the set value of integrated solar radiation was changed according to the growing season. The results revealed that a higher irrigation volume caused a higher drainage rate, which prevented the EC of the drainage from rising excessively. As the cultivation period elapsed, the EC of the drainage increased, and the lower the irrigation volume supplied, the greater the increase in the EC of the drainage. Plant length was shorter in the low irrigation volume treatment than in the other treatments, but irrigation volume did not affect the number of nodes and fruit clusters. The number of fruit sets was not significantly affected by the irrigation volume in general, but the high irrigation volume significantly decreased fruit setting and yield of the 12-15th clusters, which developed during the low-temperature period. Blossom-end rot occurred early and with a high incidence rate in the low irrigation volume treatment group. The heaviest fruits were obtained from the high irrigation treatment group, while the medium high treatment group had the highest total yield. The experiment confirmed the effect of irrigation amount on nutrient and moisture stabilization in the root zone and on yield, as well as the importance of proper irrigation control when cultivating tomato plants hydroponically on coir substrate. Therefore, continued research on this topic is needed, as a precise irrigation control algorithm based on root-zone information, applied to an integrated environmental control system, is expected to contribute to improving crop productivity as well as developing hydroponic control techniques.
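
The radiation-sum irrigation control described above can be sketched as follows; the set values, sampling interval, and daily radiation profile are illustrative assumptions, not the experiment's settings.

```python
# A minimal sketch of radiation-sum irrigation control: irrigate whenever the
# accumulated outside solar radiation reaches a set value, and change that set
# value to realize different irrigation-volume levels (values are illustrative).
import math

def irrigation_events(radiation_wm2, set_value_j_cm2, interval_s=60):
    """Count irrigation events triggered by an accumulated-radiation threshold.

    radiation_wm2   : sequence of outside solar radiation readings [W m-2]
    set_value_j_cm2 : radiation sum required per irrigation [J cm-2]
    """
    events, accumulated = 0, 0.0
    for rad in radiation_wm2:
        accumulated += rad * interval_s / 10_000.0  # W m-2 * s -> J cm-2
        if accumulated >= set_value_j_cm2:
            events += 1                              # open the drip valve here
            accumulated -= set_value_j_cm2
    return events

# ~12 h of daylight sampled every minute (crude half-sine profile, illustrative).
day = [max(0.0, 800 * math.sin(math.pi * t / 720)) for t in range(720)]
for label, set_value in [("high", 80), ("medium-high", 100), ("medium-low", 120), ("low", 150)]:
    print(label, irrigation_events(day, set_value), "irrigations")
```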

The Accuracy Evaluation of Digital Elevation Models for Forest Areas Produced Under Different Filtering Conditions of Airborne LiDAR Raw Data (항공 LiDAR 원자료 필터링 조건에 따른 산림지역 수치표고모형 정확도 평가)

  • Cho, Seungwan;Choi, Hyung Tae;Park, Joowon
    • Journal of agriculture & life science
    • /
    • v.50 no.3
    • /
    • pp.1-11
    • /
    • 2016
  • With increasing interest, there have been studies on LiDAR (Light Detection And Ranging)-based DEMs (Digital Elevation Models) for acquiring three-dimensional topographic information. To produce a LiDAR DEM with better accuracy, the filtering process is crucial: only ground-reflected LiDAR points are retained to construct the DEM, while non-ground points must be removed from the raw LiDAR data. In particular, changes to the input values of the parameters that construct the filtering algorithm are expected to produce different products. Therefore, this study aims to contribute to a better understanding of the effects of changes in the level of the GroundFilter algorithm's Mean parameter (GFmn), embedded in the FUSION software, on the accuracy of LiDAR DEM products, using LiDAR data collected for the Hwacheon, Yangju, Gyeongsan, and Jangheung watershed experimental areas. The effect of GFmn level changes on product accuracy is estimated by measuring and comparing the residuals between field-surveyed elevations and the elevations of LiDAR DEMs produced at different GFmn levels at the same sample point locations. To test whether there are any differences among the five GFmn levels (1, 3, 5, 7, and 9), a one-way ANOVA is conducted. The one-way ANOVA shows that the change in GFmn level significantly affects the accuracy (F-value: 4.915, p<0.01). After finding the GFmn level effect significant, Tukey's HSD test is conducted as a post-hoc test to group the levels by their significant differences. As a result, the GFmn levels are divided into two subsets ('7, 5, 9, 3' vs. '1'). From the residuals of each individual level, the LiDAR DEM is generated most accurately when GFmn is set to 7. Through this study, the most desirable parameter value can be suggested for producing filtered LiDAR DEM data that provide the most accurate elevation information.
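
A minimal sketch of the statistical procedure described above (one-way ANOVA on the DEM-minus-field residuals, followed by Tukey's HSD grouping), using simulated residuals rather than the study's survey data:

```python
# One-way ANOVA across GFmn levels, then Tukey's HSD post-hoc grouping.
# The residuals here are simulated for illustration only.
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(1)
levels = [1, 3, 5, 7, 9]
# Simulated absolute residuals (m) per GFmn level; level 1 made noisier on purpose.
residuals = {lv: rng.normal(0.5 if lv == 1 else 0.3, 0.1, 100) for lv in levels}

f_value, p_value = stats.f_oneway(*[residuals[lv] for lv in levels])
print(f"one-way ANOVA: F = {f_value:.3f}, p = {p_value:.4f}")

values = np.concatenate([residuals[lv] for lv in levels])
groups = np.repeat([str(lv) for lv in levels], 100)
print(pairwise_tukeyhsd(values, groups, alpha=0.05))
```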

Data-centric XAI-driven Data Imputation of Molecular Structure and QSAR Model for Toxicity Prediction of 3D Printing Chemicals (3D 프린팅 소재 화학물질의 독성 예측을 위한 Data-centric XAI 기반 분자 구조 Data Imputation과 QSAR 모델 개발)

  • ChanHyeok Jeong;SangYoun Kim;SungKu Heo;Shahzeb Tariq;MinHyeok Shin;ChangKyoo Yoo
    • Korean Chemical Engineering Research
    • /
    • v.61 no.4
    • /
    • pp.523-541
    • /
    • 2023
  • As accessibility to 3D printers increases, exposure to the chemicals associated with 3D printing is becoming more frequent. However, research on the toxicity and harmfulness of chemicals generated by 3D printing is insufficient, and the performance of toxicity prediction using in silico techniques is limited by missing molecular structure data. In this study, a quantitative structure-activity relationship (QSAR) model based on a data-centric AI approach was developed to predict the toxicity of new 3D printing materials by imputing missing values in molecular descriptors. First, the MissForest algorithm was utilized to impute missing values in the molecular descriptors of hazardous 3D printing materials. Then, based on four different machine learning models (decision tree, random forest, XGBoost, and SVM), a machine learning (ML)-based QSAR model was developed to predict the bioconcentration factor (Log BCF), the octanol-air partition coefficient (Log Koa), and the partition coefficient (Log P). Furthermore, the reliability of the data-centric QSAR model was validated through the Tree-SHAP (SHapley Additive exPlanations) method, one of the explainable artificial intelligence (XAI) techniques. The proposed MissForest-based imputation enlarged the molecular structure data approximately 2.5-fold compared to the existing data. Based on the imputed molecular descriptor dataset, the developed data-centric QSAR model achieved prediction performance of approximately 73%, 76%, and 92% for Log BCF, Log Koa, and Log P, respectively. Lastly, the Tree-SHAP analysis demonstrated that the data-centric QSAR model achieved high prediction performance for toxicity information by identifying key molecular descriptors highly correlated with the toxicity indices. Therefore, the proposed QSAR model based on the data-centric XAI approach can be extended to predict the toxicity of potential pollutants in emerging 3D printing chemicals and in chemical, semiconductor, or display processes.
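
A minimal sketch of the overall pipeline on synthetic data is given below. Note that MissForest is approximated here with scikit-learn's IterativeImputer wrapped around a random forest, a random forest stands in for the QSAR learner, and the descriptors, target, and hyperparameters are placeholders rather than the paper's configuration.

```python
# Sketch: MissForest-style imputation -> QSAR regression -> Tree-SHAP attributions.
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
import shap

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))                            # synthetic molecular descriptors
y = X[:, 0] * 2.0 - X[:, 3] + rng.normal(0, 0.1, 200)     # synthetic Log P-like endpoint
X[rng.random(X.shape) < 0.2] = np.nan                     # 20% missing descriptor values

# MissForest-like imputation: iterative imputation with a random-forest estimator.
imputer = IterativeImputer(estimator=RandomForestRegressor(n_estimators=50, random_state=0),
                           max_iter=10, random_state=0)
X_imp = imputer.fit_transform(X)

X_tr, X_te, y_tr, y_te = train_test_split(X_imp, y, test_size=0.2, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("R^2 on held-out data:", round(model.score(X_te, y_te), 3))

# Tree-SHAP descriptor attributions for the fitted tree ensemble.
shap_values = shap.TreeExplainer(model).shap_values(X_te)
print("mean |SHAP| per descriptor:", np.abs(shap_values).mean(axis=0).round(3))
```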

CT-Derived Deep Learning-Based Quantification of Body Composition Associated with Disease Severity in Chronic Obstructive Pulmonary Disease (CT 기반 딥러닝을 이용한 만성 폐쇄성 폐질환의 체성분 정량화와 질병 중증도)

  • Jae Eun Song;So Hyeon Bak;Myoung-Nam Lim;Eun Ju Lee;Yoon Ki Cha;Hyun Jung Yoon;Woo Jin Kim
    • Journal of the Korean Society of Radiology
    • /
    • v.84 no.5
    • /
    • pp.1123-1133
    • /
    • 2023
  • Purpose Our study aimed to evaluate the association between automatically quantified body composition on CT and pulmonary function or quantitative lung features in patients with chronic obstructive pulmonary disease (COPD). Materials and Methods A total of 290 patients with COPD were enrolled in this study. The volume of muscle and subcutaneous fat, the area of muscle and subcutaneous fat at T12, and the bone attenuation at T12 were obtained from chest CT using a deep learning-based body segmentation algorithm. Parametric response mapping-derived emphysema (PRMemph), PRM-derived functional small airway disease (PRMfSAD), and airway wall thickness (AWT)-Pi10 were quantitatively assessed. The association between body composition and outcomes was evaluated using Pearson's correlation analysis. Results The volume and area of muscle and subcutaneous fat were negatively associated with PRMemph and PRMfSAD (p < 0.05). Bone density at T12 was negatively associated with PRMemph (r = -0.1828, p = 0.002). The volume and area of subcutaneous fat and bone density at T12 were positively correlated with AWT-Pi10 (r = 0.1287, p = 0.030; r = 0.1668, p = 0.005; r = 0.1279, p = 0.031). However, muscle volume was negatively correlated with AWT-Pi10 (r = -0.1966, p = 0.001). Muscle volume was significantly associated with pulmonary function (p < 0.001). Conclusion Body composition, automatically assessed using chest CT, is associated with the phenotype and severity of COPD.
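
The association analysis reduces to Pearson correlations between each CT-derived body-composition measure and each severity index; a minimal sketch with simulated values (not the study cohort) follows.

```python
# Pearson correlation between a body-composition measure (e.g. muscle volume)
# and a severity index (e.g. PRM-derived emphysema). Values are simulated.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
muscle_volume = rng.normal(1500, 200, 290)                      # cm^3, simulated
prm_emph = 20 - 0.004 * muscle_volume + rng.normal(0, 3, 290)   # %, simulated negative trend

r, p = pearsonr(muscle_volume, prm_emph)
print(f"r = {r:.4f}, p = {p:.3f}")
```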

A study on evaluation of the image with washed-out artifact after applying scatter limitation correction algorithm in PET/CT exam (PET/CT 검사에서 냉소 인공물 발생 시 산란 제한 보정 알고리즘 적용에 따른 영상 평가)

  • Ko, Hyun-Soo;Ryu, Jae-kwang
    • The Korean Journal of Nuclear Medicine Technology
    • /
    • v.22 no.1
    • /
    • pp.55-66
    • /
    • 2018
  • Purpose In PET/CT exams, a washed-out artifact can occur due to severe patient motion or high specific activity, which degrades not only qualitative reading but also quantitative analysis. Scatter limitation correction by GE is an algorithm that corrects the washed-out artifact and recovers the images in a PET scan. The purpose of this study is to measure, in a phantom experiment, the threshold of specific activity at which an image showing a washed-out artifact can be recovered to its original uptake values, and to compare the quantitative analysis of clinical patient data before and after the correction. Materials and Methods PET and CT images were acquired with no misalignment (D0) and with misalignment distances of 1, 2, 3, and 4 cm (D1, D2, D3, D4), at 20 steps of specific activity from 20 to 20,000 kBq/ml on a $^{68}Ge$ cylinder phantom. Also, from 34 patients who underwent $^{18}F-FDG$ Fusion Whole Body PET/CT exams, we measured the misalignment distance of the Foley catheter line between the CT and PET images, the specific activity that produces the washed-out artifact, the $SUV_{mean}$ of muscle in the artifact slice, the $SUV_{max}$ of lesions in the artifact slice, and the $SUV_{max}$ of other lesions outside the artifact slice, before and after correction. SPSS 21 was used to analyze the difference in SUV before and after scatter limitation correction with a paired t-test. Results In the phantom experiment, the $SUV_{mean}$ of the $^{68}Ge$ cylinder decreased as the specific activity of $^{18}F$ increased, and decreased further as the misalignment distance between CT and PET increased. On the other hand, the effect of the correction increased as the distance increased. There was no washed-out artifact below 50 kBq/ml, and $SUV_{mean}$ was the same as the original. At D0 and D1, $SUV_{mean}$ recovered to the original value (0.95) below 120 kBq/ml when the scatter limitation correction was applied; at D2 and D3, it recovered below 100 kBq/ml; and at D4, it recovered below 80 kBq/ml. From the 34 clinical patients' data, the average misalignment distance was 2.02 cm and the average specific activity that produced the washed-out artifact was 490.15 kBq/ml. The average $SUV_{mean}$ of muscle and the average $SUV_{max}$ of lesions in the artifact slice before and after the correction showed significant differences by paired t-test (t=-13.805, p=0.000 and t=-2.851, p=0.012, respectively), but the average $SUV_{max}$ of lesions outside the artifact slice showed no significant difference (t=-1.173, p=0.250). Conclusion The scatter limitation correction algorithm of the GE PET/CT scanner helps to correct the washed-out artifact caused by patient motion or high specific activity and to recover the PET images. When reading an image with a washed-out artifact, by measuring the misalignment distance between the CT and PET images and the specific activity, and then applying the scatter limitation algorithm, we can analyze the images more accurately without repeating the scan.
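
A minimal sketch of the paired comparison of SUVs before and after the correction is shown below; the values are simulated, not the clinical measurements.

```python
# Paired t-test on SUVs measured before and after scatter limitation correction
# (simulated values for 34 patients, for illustration only).
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(0)
suv_before = rng.normal(0.7, 0.1, 34)                 # SUVmean of muscle in the artifact slice
suv_after = suv_before + rng.normal(0.15, 0.05, 34)   # recovered values after correction

t_stat, p_value = ttest_rel(suv_before, suv_after)
print(f"paired t-test: t = {t_stat:.3f}, p = {p_value:.4f}")
```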

Rough Set Analysis for Stock Market Timing (러프집합분석을 이용한 매매시점 결정)

  • Huh, Jin-Nyung;Kim, Kyoung-Jae;Han, In-Goo
    • Journal of Intelligence and Information Systems
    • /
    • v.16 no.3
    • /
    • pp.77-97
    • /
    • 2010
  • Market timing is an investment strategy used to obtain excess returns from the financial market. In general, detecting market timing means determining when to buy and sell to obtain excess returns from trading. In many market timing systems, trading rules have been used as an engine to generate trading signals. On the other hand, some researchers have proposed rough set analysis as a proper tool for market timing because, through its control function, it does not generate a trading signal when the market pattern is uncertain. Numeric data for rough set analysis must be discretized because rough sets only accept categorical data. Discretization searches for proper "cuts" for numeric data that determine intervals; all values that lie within an interval are transformed into the same value. In general, there are four methods of data discretization in rough set analysis: equal frequency scaling, expert's knowledge-based discretization, minimum entropy scaling, and naïve and Boolean reasoning-based discretization. Equal frequency scaling fixes the number of intervals, examines the histogram of each variable, and then determines cuts so that approximately the same number of samples fall into each interval. Expert's knowledge-based discretization determines cuts according to the knowledge of domain experts, gathered through literature review or interviews with experts. Minimum entropy scaling implements an algorithm that recursively partitions the value set of each variable so that a local measure of entropy is optimized. Naïve and Boolean reasoning-based discretization searches for categorical values by naïve scaling of the data, then finds the optimized discretization thresholds through Boolean reasoning. Although rough set analysis is promising for market timing, there is little research on the impact of the various data discretization methods on trading performance when using rough set analysis. In this study, we compare stock market timing models using rough set analysis with various data discretization methods. The research data used in this study are the KOSPI 200 from May 1996 to October 1998. The KOSPI 200 is the underlying index of the KOSPI 200 futures, the first derivative instrument in the Korean stock market. It is a market-value-weighted index consisting of 200 stocks selected by criteria on liquidity and their status in the corresponding industries, including manufacturing, construction, communication, electricity and gas, distribution and services, and financing. The total number of samples is 660 trading days. In addition, this study uses popular technical indicators as independent variables. The experimental results show that the most profitable method for the training sample is naïve and Boolean reasoning, but expert's knowledge-based discretization is the most profitable method for the validation sample. In addition, expert's knowledge-based discretization produced robust performance for both the training and validation samples. We also compared rough set analysis with a decision tree, experimenting with C4.5 for comparison purposes. The results show that rough set analysis with expert's knowledge-based discretization produced more profitable rules than C4.5.
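
Equal frequency scaling, the first of the four discretization methods listed above, can be sketched as follows; the indicator series and the number of intervals are illustrative assumptions.

```python
# Equal-frequency discretization: each technical indicator is split so that
# roughly the same number of trading days falls into each interval.
import numpy as np

def equal_frequency_cuts(values, n_intervals):
    """Return cut points that place ~equal numbers of samples in each interval."""
    quantiles = np.linspace(0, 1, n_intervals + 1)[1:-1]
    return np.quantile(values, quantiles)

def discretize(values, cuts):
    """Map each numeric value to the index of its interval (0..n_intervals-1)."""
    return np.searchsorted(cuts, values)

rng = np.random.default_rng(0)
rsi = rng.uniform(10, 90, 660)   # a technical indicator over 660 trading days (synthetic)
cuts = equal_frequency_cuts(rsi, 4)
codes = discretize(rsi, cuts)
print("cuts:", cuts.round(2), "| samples per interval:", np.bincount(codes))
```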

Sentiment Analysis of Movie Review Using Integrated CNN-LSTM Model (CNN-LSTM 조합모델을 이용한 영화리뷰 감성분석)

  • Park, Ho-yeon;Kim, Kyoung-jae
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.4
    • /
    • pp.141-154
    • /
    • 2019
  • Internet technology and social media are growing rapidly, and data mining technology has evolved to enable unstructured document representations in a variety of applications. Sentiment analysis is an important technology that can distinguish between poor and high-quality content from the text data of products, and it has proliferated with text mining. Sentiment analysis mainly analyzes people's opinions in text data by assigning predefined categories such as positive and negative. It has been studied in various directions in terms of accuracy, from simple rule-based approaches to dictionary-based approaches using predefined labels, and it is one of the most active research areas in natural language processing and text mining. Real online reviews are openly available for anyone to see, so they are not only easy to collect but also affect business: in marketing, real-world information from customers is gathered from websites rather than surveys, and depending on whether the posts are positive or negative, the customer response is reflected in sales, so firms try to identify this information. However, many reviews on a website are not always reliable and are difficult to identify. Earlier studies in this research area used the review data of the Amazon.com shopping mall, while recent studies use data on stock market trends, blogs, news articles, weather forecasts, IMDB, Facebook, etc. However, a lack of accuracy is recognized because sentiment calculations change according to the subject, paragraph, direction of the sentiment lexicon, and sentence strength. This study aims to classify the polarity of sentiment analysis into positive and negative categories and to increase the prediction accuracy of the polarity analysis using the pretrained IMDB review data set. First, as text classification algorithms related to sentiment analysis, popular machine learning algorithms such as NB (naïve Bayes), SVM (support vector machines), XGBoost, RF (random forests), and gradient boosting are adopted as comparative models. Second, deep learning has demonstrated discriminative features that can extract complex features of data; representative algorithms are CNNs (convolutional neural networks), RNNs (recurrent neural networks), and LSTM (long short-term memory). A CNN can be used similarly to BoW when processing a sentence in vector format, but it does not consider sequential data attributes. An RNN handles order well because it takes the temporal information of the data into account, but it suffers from the long-term dependency problem; LSTM is used to solve this problem. For comparison, CNN and LSTM were chosen as simple deep learning models, and in addition to the classical machine learning algorithms, CNN, LSTM, and the integrated model were analyzed. Although the algorithms have many parameters, we examined the relationship between parameter values and precision to find the optimal combination, and tried to figure out how well the models work for sentiment analysis and how they work. This study proposes an integrated CNN-LSTM algorithm to extract the positive and negative features of text. The reasons for combining these two algorithms are as follows. A CNN can extract features for classification automatically by applying convolution layers with massively parallel processing, whereas LSTM is not capable of highly parallel processing. Like faucets, the LSTM has input, output, and forget gates that can be opened and closed at the desired time; these gates have the advantage of placing memory blocks on hidden nodes. The memory block of the LSTM may not store all the data, but it can compensate for the CNN's inability to capture long-range dependencies. Furthermore, when the LSTM is applied after the CNN's pooling layer, the model has an end-to-end structure in which spatial and temporal features can be designed simultaneously. The combined CNN-LSTM achieved 90.33% accuracy; it is slower than CNN but faster than LSTM, and the presented model was more accurate than the other models. In addition, each word embedding layer can be improved by training the kernel step by step. CNN-LSTM can compensate for the weaknesses of each model, and has the advantage of improving learning layer by layer using the end-to-end structure. Based on these reasons, this study enhances the classification accuracy of movie reviews using the integrated CNN-LSTM model.
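
A minimal sketch of an integrated CNN-LSTM classifier for the IMDB review data set in Keras is given below; the layer sizes, kernel width, and training schedule are illustrative assumptions, not the paper's configuration.

```python
# Sketch of a CNN-LSTM sentiment classifier: a convolution layer extracts local
# n-gram features, max pooling shortens the sequence, and an LSTM models the
# remaining order information before the positive/negative output.
import tensorflow as tf
from tensorflow.keras import layers, models

max_words, max_len = 20000, 200
(x_tr, y_tr), (x_te, y_te) = tf.keras.datasets.imdb.load_data(num_words=max_words)
x_tr = tf.keras.preprocessing.sequence.pad_sequences(x_tr, maxlen=max_len)
x_te = tf.keras.preprocessing.sequence.pad_sequences(x_te, maxlen=max_len)

model = models.Sequential([
    layers.Embedding(max_words, 128),          # word embedding layer
    layers.Conv1D(64, 5, activation="relu"),   # local n-gram features
    layers.MaxPooling1D(4),                    # shorter sequence for the LSTM
    layers.LSTM(64),                           # order / long-range information
    layers.Dense(1, activation="sigmoid"),     # positive vs. negative
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(x_tr, y_tr, epochs=2, batch_size=128, validation_split=0.1)
print("test accuracy:", model.evaluate(x_te, y_te, verbose=0)[1])
```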