• Title/Summary/Keyword: derivative value

A Multi-Compartment Secret Sharing Method (다중 컴파트먼트 비밀공유 기법)

  • Cheolhoon Choi;Minsoo Ryu
    • The Transactions of the Korea Information Processing Society
    • /
    • v.13 no.2
    • /
    • pp.34-40
    • /
    • 2024
  • Secret sharing is a cryptographic technique that involves dividing a secret or a piece of sensitive information into multiple shares or parts, which can significantly increase the confidentiality of a secret. There has been a lot of research on secret sharing for different contexts or situations. Tassa's conjunctive secret sharing method employs polynomial derivatives to facilitate hierarchical secret sharing. However, the use of derivatives introduces several limitations in hierarchical secret sharing. Firstly, only a single group of participants can be created at each level due to the shares being generated from a sole derivative. Secondly, the method can only reconstruct a secret through conjunction, thereby restricting the specification of arbitrary secret reconstruction conditions. Thirdly, Birkhoff interpolation is required, adding complexity compared to the more accessible Lagrange interpolation used in polynomial-based secret sharing. This paper introduces the multi-compartment secret sharing method as a generalization of the conjunctive hierarchical secret sharing. Our proposed method first encrypts a secret using external groups' shares and then generates internal shares for each group by embedding the encrypted secret value in a polynomial. While the polynomial can be reconstructed with the internal shares, the polynomial just provides the encrypted secret, requiring external shares for decryption. This approach enables the creation of multiple participant groups at a single level. It supports the implementation of arbitrary secret reconstruction conditions, as well as conjunction. Furthermore, the use of polynomials allows the application of Lagrange interpolation.
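
The core machinery the abstract relies on, polynomial shares recovered by Lagrange interpolation, can be sketched briefly. The following is a minimal toy illustration, not the paper's actual construction: the prime field, the additive masking that stands in for the encryption step, and helper names such as make_shares and lagrange_at_zero are all assumptions made for the sketch.

```python
import random

PRIME = 2**127 - 1  # toy prime field; any sufficiently large prime works

def make_shares(secret, threshold, n):
    # Embed the secret as the constant term of a random polynomial of
    # degree threshold-1; shares are evaluations at x = 1..n.
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(threshold - 1)]
    def poly(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, poly(x)) for x in range(1, n + 1)]

def lagrange_at_zero(shares):
    # Lagrange interpolation at x = 0 recovers the constant term.
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        total = (total + yi * num * pow(den, -1, PRIME)) % PRIME
    return total

# Two-layer idea from the abstract: mask the secret with a value derived
# from external shares, then distribute the masked value internally.
secret, external_key = 42, random.randrange(PRIME)  # masking stands in for encryption
internal_shares = make_shares((secret + external_key) % PRIME, threshold=3, n=5)
masked = lagrange_at_zero(internal_shares[:3])      # any 3 internal shares suffice
assert (masked - external_key) % PRIME == secret    # external key still required
```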

Rough Set Analysis for Stock Market Timing (러프집합분석을 이용한 매매시점 결정)

  • Huh, Jin-Nyung;Kim, Kyoung-Jae;Han, In-Goo
    • Journal of Intelligence and Information Systems
    • /
    • v.16 no.3
    • /
    • pp.77-97
    • /
    • 2010
  • Market timing is an investment strategy used to obtain excess returns from the financial market. In general, detecting market timing means determining when to buy and sell in order to earn excess returns from trading. In many market timing systems, trading rules have been used as an engine to generate trade signals. On the other hand, some researchers have proposed rough set analysis as a proper tool for market timing because, by means of a control function, it does not generate a trade signal when the market pattern is uncertain. Numeric data for rough set analysis must first be discretized, because rough sets accept only categorical data. Discretization searches for proper "cuts" in numeric data that determine intervals; all values that lie within an interval are transformed into the same value. In general, there are four data discretization methods in rough set analysis: equal frequency scaling, expert's knowledge-based discretization, minimum entropy scaling, and naïve and Boolean reasoning-based discretization. Equal frequency scaling fixes the number of intervals, examines the histogram of each variable, and then determines cuts so that approximately the same number of samples falls into each interval. Expert's knowledge-based discretization determines cuts according to the knowledge of domain experts, gathered through literature review or interviews with experts. Minimum entropy scaling recursively partitions the value set of each variable so that a local measure of entropy is optimized. Naïve and Boolean reasoning-based discretization first categorizes values by naïve scaling of the data and then finds optimized discretization thresholds through Boolean reasoning. Although rough set analysis is promising for market timing, there is little research on how the various data discretization methods affect trading performance under rough set analysis. In this study, we compare stock market timing models that use rough set analysis with the various data discretization methods. The research data are the KOSPI 200 from May 1996 to October 1998. The KOSPI 200 is the underlying index of the KOSPI 200 futures, the first derivative instrument in the Korean stock market. It is a market-value-weighted index consisting of 200 stocks selected by criteria on liquidity and their status in the corresponding industries, including manufacturing, construction, communication, electricity and gas, distribution and services, and financing. The total number of samples is 660 trading days. In addition, this study uses popular technical indicators as independent variables. The experimental results show that the most profitable method for the training sample is naïve and Boolean reasoning-based discretization, whereas expert's knowledge-based discretization is the most profitable for the validation sample. Moreover, expert's knowledge-based discretization produced robust performance for both the training and validation samples. We also compared rough set analysis and a decision tree, using C4.5 for the comparison. The results show that rough set analysis with expert's knowledge-based discretization produced more profitable rules than C4.5.
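
Of the four discretization methods, equal frequency scaling is the most mechanical and is easy to illustrate. Below is a minimal sketch, assuming quantile-based cuts over a synthetic indicator series; it is not tied to the rough set software actually used in the study.

```python
import numpy as np

def equal_frequency_cuts(values, n_intervals):
    # Interior quantiles give cuts such that approximately the same
    # number of samples falls into each of the n_intervals bins.
    probs = np.linspace(0, 1, n_intervals + 1)[1:-1]
    return np.quantile(values, probs)

def discretize(values, cuts):
    # Map each numeric value to the index of the interval containing it.
    return np.searchsorted(cuts, values, side="right")

rng = np.random.default_rng(0)
indicator = rng.normal(50, 15, size=660)       # e.g. 660 days of one indicator
cuts = equal_frequency_cuts(indicator, 4)
categories = discretize(indicator, cuts)       # categorical values 0..3
```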

A Study on Developing a VKOSPI Forecasting Model via GARCH Class Models for Intelligent Volatility Trading Systems (지능형 변동성트레이딩시스템개발을 위한 GARCH 모형을 통한 VKOSPI 예측모형 개발에 관한 연구)

  • Kim, Sun-Woong
    • Journal of Intelligence and Information Systems
    • /
    • v.16 no.2
    • /
    • pp.19-32
    • /
    • 2010
  • Volatility plays a central role in both academic and practical applications, especially in pricing financial derivative products and trading volatility strategies. This study presents a novel mechanism based on generalized autoregressive conditional heteroskedasticity (GARCH) models that can enhance the performance of intelligent volatility trading systems by predicting Korean stock market volatility more accurately. In particular, we embedded into our model the concept of volatility asymmetry documented widely in the literature. The newly developed Korean stock market volatility index of the KOSPI 200, the VKOSPI, is used as a volatility proxy. It is the price of a linear portfolio of KOSPI 200 index options and measures the effect of the expectations of dealers and option traders on stock market volatility over 30 calendar days. The KOSPI 200 index options market started in 1997 and has become the most actively traded market in the world; its trading volume is more than 10 million contracts a day, the highest of all stock index option markets. Therefore, analyzing the VKOSPI is important for understanding the volatility inherent in option prices and can offer trading ideas for futures and option dealers. Using the VKOSPI as a volatility proxy avoids the statistical estimation problems associated with other measures of volatility, since the VKOSPI is the model-free expected volatility of market participants, calculated directly from transacted option prices. This study estimates symmetric and asymmetric GARCH models for the KOSPI 200 index from January 2003 to December 2006 by the maximum likelihood procedure. The asymmetric GARCH models include the GJR-GARCH model of Glosten, Jagannathan and Runkle, the exponential GARCH model of Nelson, and the power autoregressive conditional heteroskedasticity (ARCH) model of Ding, Granger and Engle; the symmetric model is the basic GARCH(1,1). Tomorrow's forecasted value and change direction of stock market volatility are obtained by recursive GARCH specifications from January 2007 to December 2009 and are compared with the VKOSPI. Empirical results indicate that negative unanticipated returns increase volatility more than positive return shocks of equal magnitude decrease it, indicating the existence of volatility asymmetry in the Korean stock market. The point value and change direction of tomorrow's VKOSPI are estimated and forecasted by the GARCH models. A volatility trading system is developed using the forecasted change direction of the VKOSPI: if tomorrow's VKOSPI is expected to rise, a long straddle or strangle position is established; a short straddle or strangle position is taken if the VKOSPI is expected to fall. Total profit is calculated as the cumulative sum of the VKOSPI percentage changes: if the forecasted direction is correct, the absolute value of the VKOSPI percentage change is added to the trading profit; otherwise it is subtracted. For the in-sample period, the power ARCH model fits best on a statistical metric, the Mean Squared Prediction Error (MSPE), and the exponential GARCH model shows the highest Mean Correct Prediction (MCP). The power ARCH model also fits best for the out-of-sample period and provides the highest probability of correctly predicting tomorrow's VKOSPI change direction. Generally, the power ARCH model shows the best fit for the VKOSPI.
All the GARCH models provide trading profits for the volatility trading system, and the exponential GARCH model shows the best performance, an annual profit of 197.56%, during the in-sample period. During the out-of-sample period, all GARCH models except the exponential GARCH model yield trading profits, and the power ARCH model shows the largest annual trading profit of 38%. The volatility clustering and asymmetry found in this research are reflections of volatility non-linearity. This further suggests that combining the asymmetric GARCH models with artificial neural networks could significantly enhance the performance of the suggested volatility trading system, since artificial neural networks have been shown to model nonlinear relationships effectively.
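
The profit rule described above is simple enough to state in a few lines. The sketch below is an illustration under assumptions: it takes daily VKOSPI levels and a boolean series of forecasted directions as given, omits the GARCH forecasting step itself, and uses hypothetical names.

```python
import numpy as np

def volatility_trading_profit(vkospi, forecast_up):
    # vkospi: daily index levels; forecast_up[t]: True if the model predicted
    # a rise from day t to day t+1 (long straddle/strangle), False if a fall
    # (short straddle/strangle).
    pct = np.diff(vkospi) / vkospi[:-1] * 100      # daily % change
    correct = (pct > 0) == np.asarray(forecast_up)[:len(pct)]
    # Add |% change| on correct calls, subtract it on wrong ones.
    return np.sum(np.where(correct, np.abs(pct), -np.abs(pct)))
```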

Study on the Relationship between Biliary Secretion and Cyclic Nucleotides (담즙분비와 Cyclic nucleotides간의 상호관계에 관한 연구)

  • Lee, H.W.;Kim, W.J.;Hong, S.S.;Cho, S.J.;Hong, S.U.;Lim, C.K.
    • The Korean Journal of Pharmacology
    • /
    • v.18 no.1
    • /
    • pp.43-54
    • /
    • 1982
  • Bile formation is a complex process comprising three separate physiologic mechanisms operating at two anatomical sites. At present, at least two processes are known to be responsible for total canalicular secretion at the bile canaliculus. One is bile salt-dependent secretion (BSDS), based on the hypothesis that active transport of bile salts from plasma to bile provides the primary stimulus for bile formation: the osmotic effect of actively transported bile acids is responsible for the movement of water and ions into bile. The other is bile salt-independent secretion (BSIS), which is unrelated to bile salt secretion at the canaliculus and may involve the active transport of sodium. The third process of bile formation involves the biliary ductal epithelium. Secretin-stimulated bile characteristically contains bicarbonate in high concentration; it was therefore suggested that secretin stimulates water and bicarbonate secretion from the biliary ductules. On the other hand, a large amount of cAMP was found in canine bile, but no apparent relationship was observed between bile salt secretion and the cAMP content of dog bile. However, bile flow studies in humans have demonstrated that secretin and glucagon increase bile cAMP secretion, as does secretin in baboons. Secretin increases baboon bile duct mucosal cAMP levels in addition to bile cAMP levels, suggesting that in that species secretin-stimulated bile flow may be cAMP mediated. It has been postulated that glucagon and theophylline, which increase bile salt-independent secretion in dogs, might act through an increase in liver cAMP content. In a few studies, the possible role of cAMP in bile formation has been tested by administering an exogenous derivative of cAMP, dibutyryl cAMP (DB cAMP). In the rat, DB cAMP did not modify bile flow, but injection of DB cAMP in the dog promoted an increase in bile salt-independent secretion. Because of these contradictory results, this study was carried out to examine the relationship between cyclic nucleotides and bile flow induced by various bile salts as well as by secretin or theophylline. Experiments were performed in rabbits anesthetized by injection of seconal (30 mg/kg). Rabbits had the cystic duct ligated and the proximal end of the divided common duct cannulated with an appropriately sized polyethylene catheter; a similar catheter was placed into the inferior vena cava for administration of drugs. Bile was collected for determination of cyclic nucleotides and total cholate at 15-min intervals for a few hours. The results are summarized as follows. 1) Administration of taurocholic acid or chenodeoxycholic acid significantly increased the concentrations of cAMP and cGMP in rabbit bile. 2) During continuous infusion of ursodeoxycholic acid, the bile cAMP concentration increased remarkably in accordance with the increase of bile flow, whereas the bile cGMP concentration decreased significantly. 3) Dehydrocholic acid and deoxycholic acid significantly increased bile flow, total cholate output, and cyclic nucleotides in bile. 4) Secretin significantly increased only the bile cAMP concentration above the control value, whereas theophylline increased both cAMP and cGMP in rabbit bile. 5) When secretin was administered during taurocholic acid-stimulated bile flow, cAMP increased, while theophylline produced increases in both cAMP and cGMP in bile. 6) When insulin was administered during taurocholic acid-stimulated bile flow, the cAMP concentration decreased, whereas cGMP was remarkably increased in rabbit bile.

An Examination of Knowledge Sourcing Strategies Effects on Corporate Performance in Small Enterprises (소규모 기업에 있어서 지식소싱 전략이 기업성과에 미치는 영향 고찰)

  • Choi, Byoung-Gu
    • Asia pacific journal of information systems
    • /
    • v.18 no.4
    • /
    • pp.57-81
    • /
    • 2008
  • Knowledge is an essential strategic weapon for sustaining competitive advantage and a key determinant of organizational growth. When knowledge is shared and disseminated throughout the organization, it increases an organization's value by providing the ability to respond to new and unusual situations. The growing importance of knowledge as a critical resource has forced executives to pay attention to their organizational knowledge, and organizations are increasingly undertaking knowledge management initiatives and making significant investments. Knowledge sourcing is considered the first important step in effective knowledge management, and most firms strive to realize the benefits of knowledge management by using various knowledge sources effectively. Appropriate knowledge sourcing strategies enable organizations to create, acquire, and access knowledge in a timely manner by reducing search and transfer costs, which results in better firm performance. In response, the knowledge management literature has devoted substantial attention to the analysis of knowledge sourcing strategies. Many studies have categorized knowledge sourcing strategies into internal- and external-oriented. An internal-oriented sourcing strategy attempts to increase firm performance by integrating knowledge within the boundary of the firm. In contrast, an external-oriented strategy attempts to bring knowledge in from outside sources, via either acquisition or imitation, and then to transfer that knowledge across the organization. However, the extant literature on knowledge sourcing strategies focuses primarily on large organizations. Although many studies have clearly highlighted major differences between large and small firms and the need to adopt different strategies for different firm sizes, scant attention has been given to how knowledge sourcing strategies affect firm performance in small firms and how small and large firms differ in their adoption of knowledge sourcing strategies. This study attempts to advance the current literature by examining the impact of knowledge sourcing strategies on small firm performance from a holistic perspective. Drawing on knowledge-based theory from organization science and complementarity theory from the economics literature, this paper is motivated by the following questions: (1) what are the adoption patterns of different knowledge sourcing strategies in small firms (i.e., which sourcing strategies should be adopted, and which work well together, in small firms)?; and (2) what are the performance implications of these adoption patterns? To answer these questions, this study developed three hypotheses. The first, based on knowledge-based theory, is that internal-oriented knowledge sourcing is positively associated with small firm performance. The second, also based on knowledge-based theory, is that external-oriented knowledge sourcing is positively associated with small firm performance. The third, based on complementarity theory, is that pursuing both internal- and external-oriented knowledge sourcing simultaneously is negatively, or less positively, associated with small firm performance. As a sampling frame, 700 firms were identified from the Annual Corporation Report in Korea. Survey questionnaires were mailed to owners or executives who were most knowledgeable about their firm's knowledge sourcing strategies and performance.
A total of 188 companies replied, yielding a response rate of 26.8%. Due to incomplete data, 12 responses were eliminated, leaving 176 responses for the final analysis. Since all independent variables were measured as continuous variables, a supermodularity function was used to test the hypotheses, based on the cross-partial derivative of the payoff function. The results indicated no significant impact of the internal-oriented sourcing strategy on small firm performance, but a positive impact of the external-oriented sourcing strategy. This intriguing result can be explained by the various resource and capital constraints of small firms: small firms typically have restricted financial and human resources and do not have enough assets to always develop knowledge internally. Another possible explanation is competency traps or core rigidities. Building up a knowledge base from internal knowledge creates core competences, but at the same time excessive internally focused knowledge exploration leads to behavior that is blind to other knowledge. Interestingly, this study found that internal- and external-oriented knowledge sourcing strategies had a substitutive relationship, which is inconsistent with previous studies suggesting a complementary relationship between them. This result might be explained by organizational identification theory: internal organizational members may perceive external knowledge as a threat and tend to ignore knowledge from external sources because they prefer to maintain their own knowledge, legitimacy, and homogeneous attitudes. Therefore, integrating knowledge from internal and external sources might not be effective, resulting in a failure to improve firm performance. Another possible explanation is small firms' resource and capital constraints and their lack of management expertise and absorptive capacity. Although the integration of different knowledge sources is critical, high levels of knowledge sourcing in many areas are quite expensive and often unrealistic for small enterprises. This study provides several implications for research as well as practice. First, this study extends the existing knowledge by examining the substitutability (and complementarity) of knowledge sourcing strategies. Most prior studies have tended to investigate the independent effects of these strategies on performance without considering their combined impacts. Furthermore, this study tests complementarity based on the productivity approach, which has been considered a definitive test method for complementarity. Second, this study sheds new light on knowledge management research by identifying the relationship between knowledge sourcing strategies and small firm performance. Most current literature has insisted on a complementary relationship between knowledge sourcing strategies on the basis of data from large firms; contrary to this conventional wisdom, this study identifies a substitutive relationship using data from small firms. Third, for practice, the findings suggest that managers of small firms should focus on external-oriented knowledge sourcing strategies, and that adopting both sourcing strategies simultaneously impedes small firm performance.
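
The supermodularity test mentioned above rests on the sign of the cross-partial of the payoff function: a positive cross-partial indicates complementary strategies, a negative one substitutive strategies. A discrete toy version is sketched below; the study itself used continuous measures, so the dichotomized grouping here is purely an illustrative assumption.

```python
import numpy as np

def cross_partial(performance, internal, external):
    # performance: firm performance scores; internal/external: 0/1 adoption
    # indicators (e.g. above-median use of each sourcing strategy).
    f = lambda i, e: performance[(internal == i) & (external == e)].mean()
    # Discrete analogue of the cross-partial of the payoff function:
    # positive suggests complementarity, negative suggests substitutability
    # (the substitutive case is what this study found).
    return f(1, 1) - f(1, 0) - f(0, 1) + f(0, 0)
```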

Effect of Tuberculin Skin Test on Ex-vivo Interferon-gamma Assay for Latent Tuberculosis Infection (투베르쿨린 검사가 결핵에 대한 체외 IFN-γ 검사 결과에 미치는 영향)

  • Lee, Jung Yeon;Choi, Hee Jin;Cho, Sang-Nae;Park, I-Nae;Oh, Yeon-Mok;Lee, Sang Do;Kim, Woo Sung;Kim, Dong Soon;Kim, Won Dong;Shim, Tae Sun
    • Tuberculosis and Respiratory Diseases
    • /
    • v.59 no.4
    • /
    • pp.406-412
    • /
    • 2005
  • Background: Recently, two commercialized whole-blood assays, QuantiFERON®-TB Gold (QFT) and T SPOT-TB® (SPOT), which measure the IFN-γ released in whole blood after incubation with mycobacterial antigens, were approved for the diagnosis of latent tuberculosis infection (LTBI). However, there are few data on whether a previously administered PPD skin test (TST) influences the diagnostic ability of these ex-vivo IFN-γ assays. Methods: Forty-six 15-year-old students who did not appear to be infected with Mycobacterium tuberculosis were enrolled in this study. Peripheral blood was collected and used for the two IFN-γ assays. The IFN-γ assays and TST were performed at baseline (1st). The TST was repeated two months later (2nd), and the IFN-γ assays were repeated two (2nd) and four months (3rd) later, only in those subjects who had negative baseline results in both the IFN-γ assays and the TST. An induration size > 10 mm was considered positive in the TST. Results: The mean TST value was 3.1 ± 5.4 mm (range: 0-20 mm). Of the 46 subjects examined, 13 (28.3%) showed positive results in the two-step TST. Nine (19.6%) were SPOT-positive and only one (2.2%) was QFT-positive. The 2nd and 3rd QFT were carried out in 23 and 25 all-negative subjects, respectively, and all showed negative results. The 2nd SPOT was performed in 23 subjects and only one (4.3%) showed a weak-positive result. Conclusion: Even though there were some discrepancies between the results of the two ex-vivo IFN-γ assays, their results do not appear to be influenced by a TST carried out two or four months earlier.

Estimation and Mapping of Soil Organic Matter using Visible-Near Infrared Spectroscopy (분광학을 이용한 토양 유기물 추정 및 분포도 작성)

  • Choe, Eun-Young;Hong, Suk-Young;Kim, Yi-Hyun;Zhang, Yong-Seon
    • Korean Journal of Soil Science and Fertilizer
    • /
    • v.43 no.6
    • /
    • pp.968-974
    • /
    • 2010
  • We assessed the feasibility of the discrete wavelet transform (DWT), applied as a spectral processing step, for enhancing the quality of soil organic matter estimation from visible-near infrared spectra, and mapped the resulting distribution via a block kriging model. Continuum removal and first-derivative transforms, as well as Haar and Daubechies DWTs, were used to enhance spectral variation with respect to soil organic matter content, and the processed spectra were put into a PLSR (Partial Least Squares Regression) model. Estimation results using raw reflectance and transformed spectra showed similar quality, with R² > 0.6 and RPD > 1.5; these values indicate only approximate prediction of soil organic matter content. The poorer performance of estimation using DWT spectra might be caused by the coarser approximation of the DWT, which is not sufficient to express spectral variation related to soil organic matter content. The distribution maps of soil organic matter were drawn via a spatial information model, kriging. The organic matter contents of the soil samples followed a Gaussian distribution centered at around 20 g kg⁻¹, and the values in the map were distributed with similar patterns. The estimated organic matter contents had a distribution similar to the measured values, even though some parts of the estimated-value map showed slightly higher values. If the estimation quality is further improved, spectroscopy-based estimation models and mapping may be applied to global soil mapping, soil classification, and remote sensing data analysis as a rapid and cost-effective method.
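
The processing chain described above, a wavelet transform of each spectrum followed by PLSR, can be sketched with standard libraries (PyWavelets and scikit-learn). The data here are synthetic stand-ins, and the decomposition level and component count are assumptions rather than the paper's settings.

```python
import numpy as np
import pywt                                   # PyWavelets
from sklearn.cross_decomposition import PLSRegression

def dwt_features(spectra, wavelet="haar", level=3):
    # Concatenate the multi-level DWT coefficients of each spectrum
    # into a single feature vector for the regression model.
    return np.array([np.concatenate(pywt.wavedec(s, wavelet, level=level))
                     for s in spectra])

# Synthetic stand-ins: real inputs would be visible-NIR reflectance spectra
# and laboratory-measured organic matter contents.
rng = np.random.default_rng(0)
spectra = rng.random((100, 256))              # (n_samples, n_bands)
som = rng.normal(20, 5, size=100)             # contents around 20 g/kg

X = dwt_features(spectra)
pls = PLSRegression(n_components=10).fit(X, som)
r_squared = pls.score(X, som)                 # cf. the R² > 0.6 reported above
```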

The Aesthetics of Conviction in Novel and Film Mephisto (소설과 영화 속 '메피스토'의 사상성 미학)

  • Shin, Sa-Bin
    • Journal of Popular Narrative
    • /
    • v.25 no.1
    • /
    • pp.217-247
    • /
    • 2019
  • This research paper examines the intertextuality of Klaus Mann's novel Mephisto (1936) and István Szabó's film Mephisto (1981), and how the derivative work (the film) adopted and improved on the schematic aesthetics of conviction of the original (the novel). In general, the aesthetics of conviction is applied to criticize the state socialism of the artists of the Third Reich, or the ideology of the artists of East Germany, from a biased ethical perspective. Mephisto is also based on the aesthetics of conviction. Thus, it is meaningful to examine the characteristic similarities and differences between Klaus Mann's real antagonist (Gustaf Gründgens) and his fictional antagonist (Hendrik Höfgen) from a historical-critical perspective. In this process, an aesthetic distance between the real and fictional antagonists can be secured through internal criticism in terms of intertextuality, and in this respect the film aesthetics of István Szabó are deemed to overcome the schematic limits of the original novel. The conviction in both the novel and the film of Mephisto pertains to the belief and stance of a person who compromised with the state socialism of Nazi Germany, i.e., succumbed to an irresistible history. Klaus Mann denounced Mephisto's character Höfgen (Gründgens in reality) as a "Mephisto with evil spirits" from the perspective of exile literature, using various means such as satire, caricature, sarcasm, parody, and irony. However, his novel is devoid of introspection and "utopianism," and thus could be considered to allow personal rights to be disregarded in the name of the freedom of art. On the contrary, István Szabó employed two different types of evil (the evil of Mephisto and the evil of Faust) from a dualistic perspective, instead of a dichotomous perspective of good and evil, by portraying the character of Höfgen as both Mephisto and Hamlet (i.e., a "Faust with both good and evil spirits"). However, Szabó did not present the mixed character of "Mephisto and Hamlet (Faust)" only as an object of pity; rather, he called for social responsibility by showing a much more tragic end. As such, the novel Mephisto is more like the biography of an individual, and the film Mephisto more like the biography of a generation. The aesthetics of conviction in Mephisto appears to overcome biased historical and textual perspectives through the irony of intertextuality between the novel and the film. Even if history is an irresistible "fate" for an individual, human dignity cannot be denied, because it is the "value of life." The issue of conviction is not limited to the era of Nazi Germany; it can also be raised regarding the ideology of the modern and contemporary history of Korea. History is so deeply rooted that it should not be criticized merely from a dichotomous perspective, and when it comes to the relationship between history and individual life, a neutral point of view is required. Hopefully, this research paper will provide readers with a meaningful opportunity to discover their "inner Mephisto" and "inner Hamlet."