• Title/Summary/Keyword: real experiments

Search Results: 3,334

Prefetching based on the Type-Level Access Pattern in Object-Relational DBMSs (객체관계형 DBMS에서 타입수준 액세스 패턴을 이용한 선인출 전략)

  • Han, Wook-Shin;Moon, Yang-Sae;Whang, Kyu-Young
    • Journal of KIISE: Databases / v.28 no.4 / pp.529-544 / 2001
  • Prefetching is an effective method to minimize the number of roundtrips between the client and the server in database management systems. In this paper we propose the new notions of the type-level access pattern and type-level access locality and develop an efficient prefetching policy based on them. A type-level access pattern is a sequence of attributes that are referenced in accessing the objects; type-level access locality is the phenomenon that regular and repetitive type-level access patterns exist. Existing prefetching methods are based on object-level or page-level access patterns, which consist of the object-ids or page-ids of the objects accessed. The drawback of these methods is that they work only when exactly the same objects or pages are accessed repeatedly. In contrast, even when the same objects are not accessed repeatedly, our technique effectively prefetches objects if the same attributes are referenced repeatedly, i.e., if there is type-level access locality. Many navigational applications in Object-Relational Database Management Systems (ORDBMSs) have type-level access locality, so our technique can be employed in ORDBMSs to effectively reduce the number of roundtrips and thereby significantly enhance performance. We have conducted extensive experiments in a prototype ORDBMS to show the effectiveness of our algorithm. Experimental results using the OO7 benchmark and a real GIS application show that our technique provides orders of magnitude improvement in roundtrips and several factors of improvement in overall performance over on-demand fetching and context-based prefetching, a state-of-the-art prefetching method. These results indicate that our approach significantly outperforms existing methods and is a practical technique that can be implemented in commercial ORDBMSs.
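
The idea behind type-level prefetching can be illustrated with a toy sketch (not the paper's algorithm): record which attributes have been referenced per type, and when any object of that type misses the cache, fetch all of those attributes in one round-trip. The `fetch_fn` interface, the pattern table, and the in-memory `DB` below are all hypothetical.

```python
# Toy sketch of type-level prefetching: attributes referenced on a type are
# remembered, and later accesses to other objects of that type fetch the
# whole attribute pattern in a single round-trip.

class TypeLevelPrefetcher:
    def __init__(self, fetch_fn):
        self.fetch_fn = fetch_fn   # hypothetical server round-trip: (oid, attrs) -> dict
        self.pattern = {}          # type name -> ordered set of attributes seen
        self.cache = {}            # (oid, attr) -> value
        self.roundtrips = 0

    def get(self, oid, type_name, attr):
        if (oid, attr) not in self.cache:
            # Fetch the requested attribute together with every attribute
            # previously referenced on this type (the type-level pattern).
            attrs = dict.fromkeys(list(self.pattern.get(type_name, [])) + [attr])
            values = self.fetch_fn(oid, list(attrs))
            self.roundtrips += 1
            for a, v in values.items():
                self.cache[(oid, a)] = v
        self.pattern.setdefault(type_name, {})[attr] = None
        return self.cache[(oid, attr)]

# Toy backing store standing in for the server.
DB = {i: {"name": f"obj{i}", "x": i, "y": i * 2} for i in range(3)}
pf = TypeLevelPrefetcher(lambda oid, attrs: {a: DB[oid][a] for a in attrs})

# Navigational loop touching the same attributes of different objects:
for oid in range(3):
    pf.get(oid, "Point", "x")
    pf.get(oid, "Point", "y")

print(pf.roundtrips)  # objects 1 and 2 each need only one round-trip
```

On-demand fetching would need one round-trip per attribute access (six here); the pattern-based fetch cuts that to four, even though no object is accessed twice.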


Intercomparison of Daegwallyeong Cloud Physics Observation System (CPOS) Products and the Visibility Calculation by the FSSP Size Distribution during 2006-2008 (대관령 구름물리관측시스템 산출물 평가 및 FSSP를 이용한 시정환산 시험연구)

  • Yang, Ha-Young;Jeong, Jin-Yim;Chang, Ki-Ho;Cha, Joo-Wan;Jung, Jae-Won;Kim, Yoo-Chul;Lee, Myoung-Joo;Bae, Jin-Young;Kang, Sun-Young;Kim, Kum-Lan;Choi, Young-Jean;Choi, Chee-Young
    • Korean Journal of Remote Sensing / v.26 no.2 / pp.65-73 / 2010
  • To observe and analyze the characteristics of cloud and precipitation properties, the Cloud Physics Observation System (CPOS) has been operated since December 2003 at Daegwallyeong ($37.4^{\circ}N$, $128.4^{\circ}E$, 842 m) in the Taebaek Mountains. The major instruments of CPOS are as follows: Forward Scattering Spectrometer Probe (FSSP), Optical Particle Counter (OPC), Visibility Sensor (VS), PARSIVEL disdrometer, Microwave Radiometer (MWR), and Micro Rain Radar (MRR). The former four instruments (FSSP, OPC, visibility sensor, and PARSIVEL) are for the observation and analysis of the characteristics of ground cloud (fog) and precipitation, and the others are for vertical cloud characteristics (http://weamod.metri.re.kr) in real time. To verify the CPOS products, comparisons between the instrumental products have been conducted: the qualitative size distributions of the FSSP and OPC during the hygroscopic seeding experiments, the precipitable water vapor of the MWR and radiosonde, and the rainfall rates of the PARSIVEL (or MRR) and rain gauge. Most comparisons show good agreement, with correlation coefficients greater than 0.7. These reliable CPOS products will be useful for cloud-related studies such as the cloud-aerosol indirect effect or cloud seeding. The visibility value is derived from the droplet size distribution of the FSSP. The derived FSSP visibility shows a constant overestimation by a factor of 1.7 to 1.9 compared with the values of two visibility sensors (SVS (Sentry Visibility Sensor) and PWD22 (Present Weather Detector 22)). We believe this bias comes from the limitation of the droplet size range ($2{\sim}47\;{\mu}m$) measured by the FSSP. Further studies are needed after introducing new instruments covering other size ranges.
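
A visibility conversion of the kind described can be sketched with the Koschmieder relation $V = 3.912/\beta$, accumulating the extinction coefficient $\beta$ over the droplet size bins. This is a hedged illustration only: the bin diameters and concentrations below are invented, the geometric-optics extinction efficiency $Q_{ext} \approx 2$ is a common simplification for cloud droplets, and the paper's actual conversion may differ.

```python
import math

def visibility_from_fssp(bins_um, number_conc_cm3, q_ext=2.0):
    """Koschmieder-type visibility estimate from a droplet size distribution.
    bins_um: droplet diameters (micrometres); number_conc_cm3: counts per cm^3.
    Assumes geometric-optics extinction efficiency q_ext ~ 2 for cloud droplets."""
    beta = sum(q_ext * (math.pi / 4.0) * d**2 * n * 1e-6   # extinction in m^-1
               for d, n in zip(bins_um, number_conc_cm3))
    return 3.912 / beta if beta > 0 else float("inf")      # 5% contrast threshold

# Hypothetical fog spectrum within the FSSP range (2-47 um):
diameters = [4, 8, 12, 16, 20]          # um
concentrations = [120, 80, 40, 15, 5]   # cm^-3
print(round(visibility_from_fssp(diameters, concentrations), 1))  # visibility in metres
```

The unit factor 1e-6 converts µm²·cm⁻³ to m⁻¹; droplets outside the instrument's size range are simply missing from the sum, which is one way the overestimation noted in the abstract can arise.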

Estimation of Nondestructive Rice Leaf Nitrogen Content Using Ground Optical Sensors (지상광학센서를 이용한 비파괴 벼 엽 질소함량 추정)

  • Kim, Yi-Hyun;Hong, Suk-Young
    • Korean Journal of Soil Science and Fertilizer / v.40 no.6 / pp.435-441 / 2007
  • Ground-based optical sensing over the crop canopy provides information on the mass of the plant body that reflects the light, as well as on crop nitrogen content, which is closely related to the greenness of plant leaves. This method has the merit of being non-destructive and real-time based, and thus can be conveniently used in decision making on the application of nitrogen fertilizers to crops standing in fields. In the present study, relationships among the leaf nitrogen content of the rice canopy, crop growth status, and Normalized Difference Vegetation Index (NDVI) values were investigated. The green normalized difference vegetation index ($gNDVI=({\rho}0.80{\mu}m-{\rho}0.55{\mu}m)/({\rho}0.80{\mu}m+{\rho}0.55{\mu}m)$) and NDVI ($({\rho}0.80{\mu}m-{\rho}0.68{\mu}m)/({\rho}0.80{\mu}m+{\rho}0.68{\mu}m)$) were measured using two different active sensors (GreenSeeker, NTech Inc., USA). The study was conducted in 2005-06 during the rice growing season at the experimental plots of the National Institute of Agricultural Science and Technology located at Suwon, Korea. The experiments were carried out in a randomized complete block design with four levels of nitrogen fertilizer (0, 70, 100, 130 kg N/ha) and the same amounts of phosphorus and potassium fertilizer. gNDVI and NDVI increased as growth advanced, reached maximum values around early August, and then decreased as the crop matured. gNDVI values and leaf nitrogen content were highly correlated in early July of both 2005 and 2006. On the basis of this finding, we attempted to estimate leaf N content using the gNDVI data obtained in 2005 and 2006. The determination coefficients of the linear model by gNDVI in 2005 and 2006 were 0.88 and 0.94, respectively. The measured and estimated leaf N contents using gNDVI values showed good agreement ($R^2=0.86^{***}$). Results from this study show that gNDVI values have a significant positive correlation with leaf N content and can be used to estimate leaf N before the panicle formation stage. gNDVI appeared to be a very effective parameter for estimating the leaf N content of the rice canopy.
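
The two indices and the linear calibration described above are straightforward to compute: the index formulas come directly from the abstract, and the leaf-N estimate is an ordinary least-squares fit. All reflectance and leaf-N numbers below are hypothetical, not the study's measurements.

```python
# Index formulas from the abstract, plus an OLS calibration of leaf N vs. gNDVI.

def gndvi(nir, green):
    # gNDVI = (rho_0.80um - rho_0.55um) / (rho_0.80um + rho_0.55um)
    return (nir - green) / (nir + green)

def ndvi(nir, red):
    # NDVI = (rho_0.80um - rho_0.68um) / (rho_0.80um + rho_0.68um)
    return (nir - red) / (nir + red)

# Linear calibration of leaf N (%) against gNDVI (hypothetical plot data):
xs = [0.55, 0.60, 0.65, 0.70]            # gNDVI per plot
ys = [2.1, 2.5, 2.9, 3.3]                # measured leaf N (%)
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
intercept = my - slope * mx

estimate = slope * 0.62 + intercept      # leaf N estimated from gNDVI = 0.62
print(round(gndvi(0.45, 0.09), 3), round(estimate, 2))
```

In the study the fitted model's determination coefficients were 0.88 and 0.94; the sketch above just shows the mechanics of such a fit.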

Multi-Vector Document Embedding Using Semantic Decomposition of Complex Documents (복합 문서의 의미적 분해를 통한 다중 벡터 문서 임베딩 방법론)

  • Park, Jongin;Kim, Namgyu
    • Journal of Intelligence and Information Systems / v.25 no.3 / pp.19-41 / 2019
  • According to the rapidly increasing demand for text data analysis, research and investment in text mining are being actively conducted not only in academia but also in various industries. Text mining is generally conducted in two steps. In the first step, the text of the collected documents is tokenized and structured to convert the original documents into a computer-readable form. In the second step, tasks such as document classification, clustering, and topic modeling are conducted according to the purpose of analysis. Until recently, text mining-related studies have focused on applications of the second step, such as document classification, clustering, and topic modeling. However, with the discovery that the text structuring process substantially influences the quality of the analysis results, various embedding methods have actively been studied to improve the quality of analysis results by preserving the meaning of words and documents in the process of representing text data as vectors. Unlike structured data, which can be directly fed into a variety of operations and traditional analysis techniques, unstructured text must first undergo a structuring task that transforms the original document into a form the computer can understand. "Embedding" refers to mapping arbitrary objects into a space of a specific dimension while maintaining their algebraic properties, in order to structure the text data. Recently, attempts have been made to embed not only words but also sentences, paragraphs, and entire documents in various ways. In particular, as the demand for document embedding increases rapidly, many algorithms have been developed to support it. Among them, doc2Vec, which extends word2Vec and embeds each document into one vector, is most widely used. However, the traditional document embedding method represented by doc2Vec generates a vector for each document using all the words included in the document.
This causes the limitation that the document vector is affected not only by core words but also by miscellaneous words. Additionally, traditional document embedding schemes usually map each document into a single corresponding vector. Therefore, it is difficult to accurately represent a complex document with multiple subjects as a single vector using the traditional approach. In this paper, we propose a new multi-vector document embedding method to overcome these limitations of traditional document embedding methods. This study targets documents that explicitly separate body content and keywords. For a document without keywords, the method can be applied after extracting keywords through various analysis techniques; however, since this is not the core subject of the proposed method, we introduce the process of applying the proposed method to documents that predefine keywords in the text. The proposed method consists of (1) Parsing, (2) Word Embedding, (3) Keyword Vector Extraction, (4) Keyword Clustering, and (5) Multiple-Vector Generation. The specific process is as follows. All text in a document is tokenized, and each token is represented as a vector with N-dimensional real values through word embedding. After that, to overcome the limitation of the traditional document embedding method of being affected not only by core words but also by miscellaneous words, the vectors corresponding to the keywords of each document are extracted to form a set of keyword vectors for each document. Next, clustering is conducted on the set of keywords for each document to identify the multiple subjects included in the document. Finally, a vector is generated from the keyword vectors constituting each cluster, yielding multiple vectors per document. Experiments on 3,147 academic papers revealed that the single-vector-based traditional approach cannot properly map complex documents because of interference among subjects in each vector.
With the proposed multi-vector-based method, we ascertained that complex documents can be vectorized more accurately by eliminating the interference among subjects.
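
Steps (3)-(5) of the pipeline — keyword-vector extraction, keyword clustering, and multi-vector generation — can be sketched as follows. The keyword vectors, the tiny k-means routine, and its deterministic initialization are hypothetical stand-ins; the paper's word embeddings would come from steps (1)-(2).

```python
# Sketch: cluster a document's keyword vectors and emit one centroid per
# cluster, so a two-topic document gets two vectors instead of one.
import math

def kmeans(vectors, k, iters=20):
    # Deterministic toy initialization with the first k vectors.
    centroids = list(vectors[:k])
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in vectors:
            nearest = min(range(k), key=lambda c: math.dist(v, centroids[c]))
            clusters[nearest].append(v)
        centroids = [
            tuple(sum(dim) / len(cl) for dim in zip(*cl)) if cl else centroids[i]
            for i, cl in enumerate(clusters)
        ]
    return centroids

# Hypothetical 2-D embeddings for keywords of a two-topic document:
keyword_vecs = [(0.9, 0.1), (1.0, 0.2), (0.8, 0.0),   # topic A
                (0.1, 0.9), (0.0, 1.0), (0.2, 0.8)]   # topic B
doc_vectors = kmeans(keyword_vecs, k=2)
print(sorted(doc_vectors))  # one vector per identified subject
```

A single-vector scheme would average all six keywords into one point midway between the topics; the two centroids keep the subjects separated, which is the interference-elimination effect the abstract describes.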

An Experimental Study on the Hydration Heat of Concrete Using Phosphate based Inorganic Salt (인산계 무기염을 이용한 콘크리트의 수화 발열 특성에 관한 실험적 연구)

  • Jeong, Seok-Man;Kim, Se-Hwan;Yang, Wan-Hee;Kim, Young-Sun;Ki, Jun-Do;Lee, Gun-Cheol
    • Journal of the Korea Institute of Building Construction / v.20 no.6 / pp.489-495 / 2020
  • Whereas the control of the hydration heat in mass concrete has become important as concrete structures grow larger, many conventional strategies show limitations in their effectiveness and practicality. Therefore, in this study, as a method of controlling the heat of hydration of mass concrete, an approach that reduces the heat of hydration by controlling the hardening of cement was examined. The reduction of the hydration heat by the developed phosphate-based inorganic salt was verified in insulated boxes filled with binder paste or concrete mixture. That is, the effects of the phosphate-based inorganic salt on the hydration heat, flow or slump, and compressive strength were analyzed in binary and ternary blended cements, which are generally used for low heat of hydration. As a result, the internal maximum temperature rise induced by the hydration heat was decreased by 9.5~10.6% and 10.1~11.7% for binder paste and concrete mixed with the phosphate-based inorganic salt, respectively. In addition, a delay of the time to peak temperature was clearly observed, which is beneficial to the emission of the internal hydration heat in real structures. The phosphate-based inorganic salt developed and verified by the series of experiments above showed better performance than existing ones in terms of the control of the hydration heat and other properties. It can be used for the purpose of controlling the hydration heat of mass concrete in the future.

An Outlier Detection Using Autoencoder for Ocean Observation Data (해양 이상 자료 탐지를 위한 오토인코더 활용 기법 최적화 연구)

  • Kim, Hyeon-Jae;Kim, Dong-Hoon;Lim, Chaewook;Shin, Yongtak;Lee, Sang-Chul;Choi, Youngjin;Woo, Seung-Buhm
    • Journal of Korean Society of Coastal and Ocean Engineers / v.33 no.6 / pp.265-274 / 2021
  • Outlier detection research on ocean data has traditionally been performed using statistical and distance-based machine learning algorithms. Recently, AI-based methods have received much attention, and so-called supervised learning methods that require classification information for the data are mainly used. Supervised learning requires much time and cost because classification information (labels) must be manually assigned to all the data required for learning. In this study, an autoencoder based on unsupervised learning was applied for outlier detection to overcome this problem. Two experiments were designed: one is univariate learning, in which only SST data among the observation data of Deokjeok Island were used, and the other is multivariate learning, in which SST, air temperature, wind direction, wind speed, air pressure, and humidity were used. The data cover 25 years, from 1996 to 2020, and pre-processing considering the characteristics of ocean data was applied. We then tried to detect outliers in real SST data using the trained univariate and multivariate autoencoders. To compare model performance, various outlier detection methods were applied to synthetic data with artificially inserted errors. As a result of quantitatively evaluating the performance of these methods, the multivariate/univariate accuracy was about 96%/91%, respectively, indicating that the multivariate autoencoder had better outlier detection performance. Outlier detection using an unsupervised learning-based autoencoder is expected to be used in various ways, in that it can reduce subjective classification errors and the cost and time required for data labeling.
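
The detection rule described — train on the data, flag samples the model reconstructs poorly — can be sketched compactly. For brevity the "autoencoder" below is linear and fitted in closed form (a linear autoencoder with squared loss learns the same subspace as PCA); the paper's nonlinear autoencoder would be trained with a deep-learning framework, and all data here are synthetic stand-ins for the ocean observations.

```python
# Sketch of reconstruction-error outlier detection with a (linear) autoencoder.
import numpy as np

rng = np.random.default_rng(42)
X = rng.normal(size=(1000, 6)) @ rng.normal(size=(6, 6)) * 0.5  # correlated "observations"
X[::100] += rng.normal(scale=15, size=(10, 6))                  # artificially inserted errors

mean = X.mean(axis=0)
Xc = X - mean
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
W = Vt[:2].T                                # encoder/decoder: top-2 principal directions
recon = (Xc @ W) @ W.T + mean               # reconstruction from the 2-D latent code
errors = np.linalg.norm(X - recon, axis=1)  # reconstruction error per sample

threshold = errors.mean() + 3 * errors.std()
outliers = np.flatnonzero(errors > threshold)
print(len(outliers))                        # rows flagged as outliers
```

The 3-sigma threshold is one simple choice; in practice the cutoff would be tuned on validation data, as when comparing methods on synthetic errors in the study.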

A study on the effect of microgroove-fibronectin complex titanium plate on the expression of various cell behavior-related genes in human gingival fibroblasts (인간치은섬유아세포의 다양한 세포행동 관련 유전자발현에 마이크로그루브-파이브로넥틴 복합 티타늄표면이 미치는 영향에 대한 연구)

  • Hwang, Yu Jeong;Lee, Won Joong;Leesungbok, Richard;Lee, Suk Won
    • Journal of Dental Rehabilitation and Applied Science / v.38 no.3 / pp.150-161 / 2022
  • Purpose: To determine the effects of the microgroove-fibronectin complex surface on the expression of various genes related to cellular activity in human gingival fibroblasts. Materials and Methods: Smooth titanium specimens (NE0), acid-treated titanium specimens (E0), microgrooved and acid-treated titanium specimens (E60/10), fibronectin-immobilized smooth titanium specimens (NE0FN), acid-treated and fibronectin-immobilized titanium specimens (E0FN), and microgrooved, acid-treated titanium specimens immobilized with fibronectin (E60/10FN) were prepared. Real-time polymerase chain reaction experiments were conducted on 44 genes related to the cell behavior of human gingival fibroblasts. Results: Adhesion and proliferation of human gingival fibroblasts on the microgroove-fibronectin complex titanium were activated through four types of signaling pathways. Integrin α5, Integrin β1, Integrin β3, and Talin-2, which belong to the focal adhesion pathway; AKT1, AKT2, and NF-κB, which belong to the PI3K-AKT signaling pathway; MEK2, ERK1, and ERK2, which belong to the MAPK signaling pathway; and the Cyclin D1, CDK4, and CDK6 genes, belonging to the cell cycle signaling pathway, were upregulated on the microgroove-fibronectin complex titanium surface (E60/10FN). Conclusion: The microgroove-fibronectin complex titanium surface can upregulate various genes involved in cell behavior.

Influence of identifiable victim effect on third-party's punishment and compensation judgments (인식 가능한 피해자 효과가 제3자의 처벌 및 보상 판단에 미치는 영향)

  • Choi, InBeom;Kim, ShinWoo;Li, Hyung-Chul O.
    • Korean Journal of Forensic Psychology / v.11 no.2 / pp.135-153 / 2020
  • The identifiable victim effect refers to the tendency to show greater sympathy and helping behavior toward identifiable victims than toward abstract, unidentifiable ones. This research tested whether this tendency also affects third parties' punishment and compensation judgments in a jury context for the public's legal judgments. In addition, through the identifiable victim effect in such legal judgments, we intended to explain the effect of 'the bill named for the victim', which puts the victim's real name and identity at the forefront and is aimed at strengthening the punishment of related crimes by gaining public attention and support. To do so, we conducted experiments with hypothetical traffic accident scenarios that controlled legal components while manipulating the victim's identifying information. In Experiment 1, each participant read a scenario about an anonymous victim (unidentifiable condition) or a non-anonymous victim whose personal information such as name and age was included (identifiable condition) and made judgments on the degree of punishment and compensation. The results showed no effect of identifiability on third parties' punishment and compensation judgments, but a moderation effect of belief in a just world (BJW) was obtained in the identifiable condition. That is, those with higher BJW showed a greater tendency to punish and compensate for identifiable victims. In Experiment 2, we compared an anonymous victim (unidentifiable condition) against a well-conducted victim (positive condition) and an ill-conducted victim (negative condition) to test the effects of the victim's characteristics on punishment of the offender and compensation of the victim. The results showed lower compensation for an ill-conducted victim than for an anonymous one. In addition, in all conditions except the negative condition, participants made punishment and compensation judgments higher than the average of judicial precedents (10 points) presented on the rating scale. This research showed that a victim's characteristics other than legal components affect third parties' legal decision making. Furthermore, we interpreted third parties' tendency to impose higher punishment and compensation in terms of the effect of 'the bill named for the victim' and proposed directions for social and legal discussion and future research.


Ship Detection from SAR Images Using YOLO: Model Constructions and Accuracy Characteristics According to Polarization (YOLO를 이용한 SAR 영상의 선박 객체 탐지: 편파별 모델 구성과 정확도 특성 분석)

  • Yungyo Im;Youjeong Youn;Jonggu Kang;Seoyeon Kim;Yemin Jeong;Soyeon Choi;Youngmin Seo;Yangwon Lee
    • Korean Journal of Remote Sensing / v.39 no.5_3 / pp.997-1008 / 2023
  • Ship detection at sea can be performed in various ways. In particular, satellites can provide wide-area surveillance, and Synthetic Aperture Radar (SAR) imagery can be utilized day and night and in all weather conditions. To propose an efficient ship detection method from SAR images, this study applied the You Only Look Once Version 5 (YOLOv5) model to Sentinel-1 images and analyzed the difference between individual and integrated models as well as the accuracy characteristics of each polarization. YOLOv5s, which has fewer parameters and is lighter, and YOLOv5x, which has more parameters but higher accuracy, were used for the performance tests (1) with each polarization (HH, HV, VH, and VV) separately and (2) with images from all polarizations together. All four experiments, using 19,582 images in total, showed very similar and high accuracy of 0.977 ≤ AP@0.5 ≤ 0.998. This result suggests that the polarization-integrated model using a lightweight YOLO model can be the most effective in terms of real-time system deployment. Moreover, if other SAR images, such as Capella and ICEYE, are included in addition to Sentinel-1 images, a more flexible and accurate model for ship detection can be built.
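
The AP@0.5 metric reported above counts a detection as correct when its intersection-over-union (IoU) with a ground-truth box is at least 0.5. A minimal sketch of that matching step is below; the boxes and confidence scores are invented, and full AP additionally integrates the precision-recall curve over confidence thresholds.

```python
# Minimal sketch of the IoU >= 0.5 matching behind AP@0.5; boxes are
# (x1, y1, x2, y2) in pixels.
def iou(a, b):
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def match_detections(detections, truths, thr=0.5):
    """Greedy matching by confidence; returns (true positives, false positives)."""
    used, tp, fp = set(), 0, 0
    for box, conf in sorted(detections, key=lambda d: -d[1]):
        best = max((i for i in range(len(truths)) if i not in used),
                   key=lambda i: iou(box, truths[i]), default=None)
        if best is not None and iou(box, truths[best]) >= thr:
            used.add(best); tp += 1
        else:
            fp += 1
    return tp, fp

truths = [(10, 10, 50, 50), (100, 100, 140, 140)]
detections = [((12, 12, 52, 52), 0.9),     # good overlap -> TP
              ((100, 100, 120, 140), 0.8), # IoU exactly 0.5 -> TP at thr=0.5
              ((200, 200, 240, 240), 0.7)] # no ship here -> FP
print(match_detections(detections, truths))
```

Precision and recall computed from such TP/FP counts at each confidence cutoff are what the AP@0.5 values in the abstract summarize.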

The Impact of the Internet Channel Introduction Depending on the Ownership of the Internet Channel (도입주체에 따른 인터넷경로의 도입효과)

  • Yoo, Weon-Sang
    • Journal of Global Scholars of Marketing Science / v.19 no.1 / pp.37-46 / 2009
  • The Census Bureau of the Department of Commerce announced in May 2008 that U.S. retail e-commerce sales for 2006 reached $107 billion, up from $87 billion in 2005 - an increase of 22 percent. From 2001 to 2006, retail e-sales increased at an average annual growth rate of 25.4 percent. The explosive growth of e-commerce has caused profound changes in marketing channel relationships and structures in many industries. Despite the great potential implications for both academicians and practitioners, a great deal of uncertainty still exists about the impact of the Internet channel introduction on distribution channel management. The purpose of this study is to investigate how the ownership of the new Internet channel affects the existing channel members and consumers. To explore these research questions, this study conducts well-controlled mathematical experiments that isolate the impact of the Internet channel by comparing the situations before and after the Internet channel entry. The model consists of a monopolist manufacturer selling its product through a channel system including one independent physical store before the entry of an Internet store. The addition of the Internet store to this channel system results in a mixed channel comprised of two different types of channels. The new Internet store can be launched by the independent physical store, such as Bestbuy. In this case, the physical retailer coordinates the two types of stores to maximize the joint profits from the two stores. The Internet store can also be introduced by an independent Internet retailer such as Amazon. In this case, retail-level competition occurs between the two types of stores. Although the manufacturer sells only one product, consumers view each product-outlet pair as a unique offering. Thus, the introduction of the Internet channel provides two product offerings for consumers. The channel structures analyzed in this study are illustrated in Fig. 1.
It is assumed that the manufacturer acts as a Stackelberg leader maximizing its own profits with foresight of the independent retailer's optimal responses, as typically assumed in previous analytical channel studies. As a Stackelberg follower, the independent physical retailer or independent Internet retailer maximizes its own profits, conditional on the manufacturer's wholesale price. The price competition between the two independent retailers is assumed to be a Bertrand-Nash game. For simplicity, the marginal cost is set at zero, as typically assumed in this type of study. In order to explore the research questions above, this study develops a game-theoretic model with the following three key characteristics. First, the model explicitly captures the fact that an Internet channel and a physical store exist in two independent dimensions (one in physical space and the other in cyberspace). This enables the model to demonstrate that the effect of adding an Internet store is different from that of adding another physical store. Second, the model reflects the fact that consumers are heterogeneous in their preferences for using a physical store and for using an Internet channel. Third, the model captures the vertical strategic interactions between an upstream manufacturer and a downstream retailer, making it possible to analyze the channel structure issues discussed in this paper. Although numerous previous models capture this vertical dimension of marketing channels, none simultaneously incorporates the three characteristics reflected in this model. The analysis results are summarized in Table 1. When the new Internet channel is introduced by the existing physical retailer and the retailer coordinates both types of stores to maximize the joint profits from both stores, retail prices increase due to a combination of the coordination of the retail prices and the wider market coverage.
The quantity sold does not significantly increase despite the wider market coverage, because the excessively high retail prices alleviate the market coverage effect to a degree. Interestingly, the coordinated total retail profits are lower than the combined retail profits of two competing independent retailers. This implies that when a physical retailer opens an Internet channel, the retailers could be better off managing the two channels separately rather than coordinating them, unless they have foresight of the manufacturer's pricing behavior. It is also found that the introduction of an Internet channel affects the power balance of the channel. Retail competition is strong when an independent Internet store joins a channel with an independent physical retailer. This implies that each retailer in this structure has weak channel power. Due to the intense retail competition, the manufacturer uses its channel power to increase its wholesale price and extract more profits from the total channel profit. However, the retailers cannot increase retail prices accordingly because of the intense retail-level competition, leaving them with lower channel power. In this case, consumer welfare increases due to the wider market coverage and the lower retail prices caused by the retail competition. The model employed for this study is not designed to capture all the characteristics of the Internet channel. The theoretical model in this study can also be applied to any store that is not geographically constrained, such as TV home shopping or catalog sales via mail. The reasons the model in this study is labeled "Internet" are as follows: first, the most representative example of stores that are not geographically constrained is the Internet. Second, catalog sales usually determine their target markets using pre-specified mailing lists. In this respect, the model used in this study is closer to the Internet than to catalog sales.
However, it would be a desirable future research direction to mathematically and theoretically distinguish the core differences among the stores that are not geographically constrained. The model is simplified by a set of assumptions to maintain mathematical tractability. First, this study assumes that price is the only strategic tool for competition. In the real world, however, various marketing variables can be used for competition. Therefore, a more realistic model could be designed by incorporating other marketing variables such as service levels or operation costs. Second, this study assumes a market with one monopoly manufacturer, so the results should be carefully interpreted considering this limitation. Future research could relax this limitation by introducing manufacturer-level competition. Finally, some of the results are drawn from the assumption that the monopoly manufacturer is the Stackelberg leader. Although this is a standard assumption among game-theoretic studies of this kind, we could gain deeper understanding and generalize our findings beyond this assumption if the model were analyzed under different game rules.
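
The Stackelberg structure assumed above (manufacturer as leader, retailer as follower, zero marginal cost) can be illustrated numerically with a single retailer and a linear demand q = 1 - p. This toy reproduces the classic double-marginalization outcome; it is not the paper's two-channel model, and the demand curve is an assumption for illustration only.

```python
# Stackelberg illustration with linear demand q = 1 - p and zero marginal
# cost: the retailer's best response is derived first, then the manufacturer
# optimizes its wholesale price anticipating that response.

def retailer_best_price(w):
    # Retailer maximizes (p - w)(1 - p) -> p* = (1 + w) / 2
    return (1 + w) / 2

def manufacturer_profit(w):
    p = retailer_best_price(w)
    return w * (1 - p)          # wholesale margin times quantity sold

# Grid search over wholesale prices stands in for the closed-form solution.
ws = [i / 1000 for i in range(1001)]
w_star = max(ws, key=manufacturer_profit)
p_star = retailer_best_price(w_star)
print(w_star, p_star)  # w* = 0.5, retail price p* = 0.75
```

The retail price (0.75) exceeds the integrated-channel price (0.5 under the same demand), which is the double-marginalization logic underlying the coordinated-versus-competing comparisons in the analysis.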
