• Title/Summary/Keyword: random sum

Search results: 252

BETTI NUMBERS OF GAUSSIAN FIELDS

  • Park, Changbom;Pranav, Pratyush;Chingangbam, Pravabati;Van De Weygaert, Rien;Jones, Bernard;Vegter, Gert;Kim, Inkang;Hidding, Johan;Hellwing, Wojciech A.
    • Journal of The Korean Astronomical Society
    • /
    • v.46 no.3
    • /
    • pp.125-131
    • /
    • 2013
  • We present the relation between the genus in cosmology and the Betti numbers for excursion sets of three- and two-dimensional smooth Gaussian random fields, and numerically investigate the Betti numbers as a function of threshold level. Betti numbers are topological invariants of figures that can be used to distinguish topological spaces. In the case of the excursion sets of a three-dimensional field there are three possibly non-zero Betti numbers; ${\beta}_0$ is the number of connected regions, ${\beta}_1$ is the number of circular holes (i.e., complement of solid tori), and ${\beta}_2$ is the number of three-dimensional voids (i.e., complement of three-dimensional excursion regions). Their sum with alternating signs is the genus of the surface of excursion regions. It is found that each Betti number has a dominant contribution to the genus in a specific threshold range. ${\beta}_0$ dominates the high-threshold part of the genus curve measuring the abundance of high density regions (clusters). ${\beta}_1$ dominates the genus near the median thresholds which measures the topology of negatively curved iso-density surfaces, and ${\beta}_2$ corresponds to the low-threshold part measuring the void abundance. We average the Betti number curves (the Betti numbers as a function of the threshold level) over many realizations of Gaussian fields and find that both the amplitude and shape of the Betti number curves depend on the slope of the power spectrum n in such a way that their shape becomes broader and their amplitude drops less steeply than the genus as n decreases. This behaviour contrasts with the fact that the shape of the genus curve is fixed for all Gaussian fields regardless of the power spectrum. Even though the Gaussian Betti number curves should be calculated for each given power spectrum, we propose to use the Betti numbers for better specification of the topology of large scale structures in the universe.
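The alternating-sum relation between the Betti numbers and the genus can be sketched as simple arithmetic. The sign convention below (genus g = β₁ − β₀ − β₂, i.e., minus the Euler characteristic χ = β₀ − β₁ + β₂) is an assumption following common cosmology usage; the paper's exact convention may differ.

```python
def genus_from_betti(b0, b1, b2):
    """Genus of the excursion-set boundary as an alternating sum of Betti
    numbers. Sign convention g = b1 - b0 - b2 (minus the Euler
    characteristic) is assumed here, following common cosmology usage."""
    return b1 - b0 - b2

# High threshold: isolated clusters dominate (b0 large) -> negative genus
cluster_regime = genus_from_betti(100, 2, 0)
# Median threshold: tunnels dominate (b1 large) -> positive, sponge-like
sponge_regime = genus_from_betti(3, 50, 2)
# Low threshold: voids dominate (b2 large) -> negative genus
void_regime = genus_from_betti(2, 3, 40)
```

Each regime illustrates the abstract's point that one Betti number dominates the genus curve in a specific threshold range.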

Robust Design Method for Complex Stochastic Inventory Model

  • Hwang, In-Keuk;Park, Dong-Jin
    • Proceedings of the Korean Operations and Management Science Society Conference
    • /
    • 1999.04a
    • /
    • pp.426-426
    • /
    • 1999
  • There are many sources of uncertainty in a typical production and inventory system. There is uncertainty as to how many items customers will demand during the next day, week, month, or year, and uncertainty about delivery times of the product. Uncertainty exacts a toll from management in a variety of ways. A spurt in demand or a delay in production may lead to stockouts, with the potential for lost revenue and customer dissatisfaction. Firms typically hold inventory to provide protection against uncertainty: a cushion of inventory on hand allows management to face unexpected demands or delays in delivery with a reduced chance of incurring a stockout. The proposed strategies are used for the design of a probabilistic inventory system. In the traditional approach to the design of an inventory system, the goal is to find the best setting of various inventory control policy parameters, such as the re-order level, review period, and order quantity, which would minimize the total inventory cost. The goals of the analysis need to be defined so that robustness becomes an important design criterion, and one has to conceptualize and identify appropriate noise variables. There are two main goals for the inventory policy design: one is to minimize the average inventory cost and the stockouts; the other is to minimize the variability of the average inventory cost and the stockouts. The total average inventory cost is the sum of three components: the ordering cost, the holding cost, and the shortage cost. The shortage cost includes the cost of lost sales, loss of goodwill, customer dissatisfaction, etc. The noise factors for this design problem are identified to be the mean demand rate and the mean lead time; both the demand and the lead time are assumed to be normal random variables.
Thus robustness for this inventory system is interpreted as insensitivity of the average inventory cost and the stockouts to uncontrollable fluctuations in the mean demand rate and mean lead time. To make this inventory system robust, the concept of utility theory is used. Utility theory is an analytical method for making a decision about an action to take, given a set of multiple criteria upon which the decision is to be based. It is appropriate for designs whose attributes have different scales, such as demand rate and lead time, since it maps each decision-making attribute onto a common zero-to-one scale, with higher preference modeled as a higher rank. Using utility theory, three design strategies for the robust inventory system will be developed: the distance strategy, the response strategy, and the priority-based strategy.
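The two design goals (minimize the average cost and its variability under demand and lead-time noise) can be sketched with a toy Monte Carlo simulation of a reorder-point policy. Every parameter value below is a hypothetical illustration, not taken from the paper.

```python
import random
import statistics

def simulate_annual_cost(order_qty, reorder_point, mean_demand, mean_lead,
                         order_cost=50.0, hold_cost=0.05, short_cost=2.0,
                         n_days=365, seed=0):
    """Toy simulation of a reorder-point inventory policy. Daily demand and
    lead time are normal random variables (the paper's noise factors); all
    cost parameters are hypothetical illustration values."""
    rng = random.Random(seed)
    on_hand, pipeline = order_qty, []   # pipeline: (arrival_day, qty)
    ordering = holding = shortage = 0.0
    for day in range(n_days):
        arrived = sum(q for d, q in pipeline if d <= day)
        pipeline = [(d, q) for d, q in pipeline if d > day]
        on_hand += arrived
        demand = max(0, round(rng.gauss(mean_demand, 0.2 * mean_demand)))
        if demand > on_hand:                      # stockout penalty
            shortage += (demand - on_hand) * short_cost
            on_hand = 0
        else:
            on_hand -= demand
        holding += on_hand * hold_cost            # end-of-day holding cost
        if on_hand <= reorder_point and not pipeline:
            lead = max(1, round(rng.gauss(mean_lead, 0.3 * mean_lead)))
            pipeline.append((day + lead, order_qty))
            ordering += order_cost
    return ordering + holding + shortage

# Robust design looks at BOTH the mean cost and its spread across noise
costs = [simulate_annual_cost(80, 60, 10, 5, seed=s) for s in range(20)]
avg_cost, cost_spread = statistics.mean(costs), statistics.stdev(costs)
```

A robust policy setting is one with a low `avg_cost` that also keeps `cost_spread` small as the noise factors fluctuate.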


The Local Myocardial Perfusion Rates of Right Atrial Cardioplegia in Hearts with Coronary Arterial Obstruction (관상동맥 협착을 동반한 심장에서 심근보호액 우심방 관류법의 심근 국소관류량)

  • Lee, Jae-Won;Seo, Gyeong-Pil
    • Journal of Chest Surgery
    • /
    • v.25 no.1
    • /
    • pp.1-16
    • /
    • 1992
  • The quantitatively measured local myocardial perfusion rates with microspheres are used as an objective indicator of even distribution of cardioplegic solution, and the efficacy of the retrograde right atrial route of cardioplegia is evaluated in hearts with various levels of coronary arterial obstruction. After initial antegrade cardioplegia under median sternotomy and aortic cannulation, 60 hearts from anesthetized New Zealand white rabbits are divided in random order into a normal group [no coronary obstruction; NA, NR], a main group [ligated left main coronary artery; MA, MR], and a diagonal group [ligated proximal diagonal artery; LA, LR]. Half of each group [n=10] are perfused with antegrade cardioplegia [A] under a pressure of 100 cmH2O and the other half via the retrograde right atrial route [R] under a pressure of 60 cmH2O [St. Thomas cardioplegic solution mixed with a measured amount of microspheres]. The myocardium is subdivided into segments as A [atria], RV [right ventricle], S [septum], LV [normally perfused left ventricular free wall], and ROI [ischemic myocardium of the left ventricular free wall]. LV and ROI are further divided into N [subendocardium] and P [subepicardium]. The resulting local myocardial perfusion rates and N/P ratios of each group are compared with the Wilcoxon rank sum test. The weight of the hearts is 5.94$\pm$0.66 g, and there are no statistically significant differences [p>0.05, ANOVA] between the six compared groups. The mean flow rate [F: ml/g/min] of the MR group is comparable with the MA group [p>0.05], but in the N and L groups F is significantly depressed with the right atrial route of cardioplegia, which means elevated perfusion resistance with this route. In spite of no significant differences in the delivered doses of microspheres [DEL] between compared groups [p>0.05, ANOVA], REC and NF are significantly depressed in hearts with right atrial cardioplegia, which suggests an increased requirement of cardioplegic solution with this route.
The interventricular septum shows poor perfusion with the right atrial route of cardioplegia when the supplying coronary arteries are not obstructed. But with obstruction of the coronary artery supplying the septum, as in the M group, the flow rate is superior with the right atrial route of infusion. The left ventricular free wall perfusion rates of every ROI segment with the R route are superior to those of the A route [p<0.05]. In the LV segments, however, there are unfavorable effects of right atrial cardioplegia in the L group, although subendocardial perfusion is well maintained in the N group. The LV free wall of the left main group shows depressed perfusion rates with the antegrade route as compared with the ROI segments of the diagonal group but, on the contrary, increased perfusion rates and a superior N/P ratio with the retrograde right atrial route. This implies more effective perfusion with the right atrial route of cardioplegia in more proximal coronary arterial obstruction [i.e., the M group as compared with the L group]. In conclusion, all regions of ischemia have superior perfusion rates with right atrial cardioplegia as compared with the antegrade route, and especially excellent results can be obtained in hearts with more proximal obstruction of the coronary arteries, which would otherwise result in more severe ischemic damage. However, the depressed perfusion rates of segments with a normal coronary artery in hearts with coronary arterial obstruction may be a problem of concern with right atrial cardioplegia and needs a solution.
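The group comparisons above use the Wilcoxon rank sum test. A minimal pure-Python sketch of the statistic (sum of ranks of one group in the pooled sample, with midranks for ties) is shown below; the perfusion values are hypothetical, not the paper's data.

```python
def rank_sum(xs, ys):
    """Wilcoxon rank sum statistic: sum of the ranks of xs within the
    pooled sample xs + ys, using midranks for tied values."""
    pooled = sorted(xs + ys)
    def midrank(v):
        lo = pooled.index(v) + 1          # first rank of v (1-based)
        hi = lo + pooled.count(v) - 1     # last rank of v
        return (lo + hi) / 2
    return sum(midrank(x) for x in xs)

# Hypothetical perfusion rates (ml/g/min) for the two delivery routes
antegrade  = [1.2, 1.5, 1.1, 1.8, 1.4]
retrograde = [0.9, 1.0, 1.3, 0.8, 1.1]
W = rank_sum(antegrade, retrograde)
```

In practice `scipy.stats.ranksums` computes the same test with a p-value; under the null, the expected rank sum here is n(n+m+1)/2 = 27.5, so W = 37.5 leans toward higher antegrade values in this toy sample.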


A Prefetching and Memory Management Policy for Personal Solid State Drives (개인용 SSD를 위한 선반입 및 메모리 관리 정책)

  • Baek, Sung-Hoon
    • The KIPS Transactions:PartA
    • /
    • v.19A no.1
    • /
    • pp.35-44
    • /
    • 2012
  • Traditional technologies used to improve the performance of hard disk drives often degrade performance when applied to solid state drives (SSDs). In hard disk drives, which consist of mechanical components, access time and block sequence are very important performance factors. Meanwhile, an SSD provides superior random read performance that is not affected by block address sequence, due to the characteristics of flash memory. In practice, it is recommended to disable prefetching if an SSD is installed in a personal computer. However, this paper presents a combination of a prefetching scheme and a memory management scheme that considers the internal structure of the SSD and the characteristics of NAND flash memory. It is important that an SSD operate multiple flash memory chips concurrently. The I/O unit size of NAND flash memory tends to increase and has exceeded the block size of operating systems; hence, the proposed prefetching scheme operates in the operating unit of the SSD. To complement a weak point of the prefetching scheme, the proposed memory management scheme adaptively evicts uselessly prefetched data to maximize the sum of the cache hit rate and the prefetch hit rate. We implemented the proposed schemes as a Linux kernel module and evaluated them using a commercial SSD. The schemes improved I/O performance by up to 26% in a given experiment.
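The core idea, prefetching paired with an eviction policy that discards prefetched-but-never-used data first, can be sketched in a few lines. This is an illustrative adaptation of the idea, not the paper's actual kernel-level policy; the sequential next-block prefetch heuristic is an assumption.

```python
from collections import OrderedDict

class PrefetchCache:
    """Sketch: prefetch the next sequential block and, when full, evict
    prefetched-but-never-used blocks before demand-fetched ones, so that
    useless prefetches do not crowd out the cache."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.cache = OrderedDict()   # block -> True if prefetched and unused
        self.hits = self.prefetch_hits = self.misses = 0

    def _evict_one(self):
        for blk, unused_prefetch in self.cache.items():
            if unused_prefetch:                 # drop a useless prefetch first
                del self.cache[blk]
                return
        self.cache.popitem(last=False)          # otherwise evict the LRU block

    def _insert(self, blk, prefetched):
        if len(self.cache) >= self.capacity:
            self._evict_one()
        self.cache[blk] = prefetched

    def read(self, blk):
        if blk in self.cache:
            if self.cache[blk]:
                self.prefetch_hits += 1         # first use of a prefetched block
            else:
                self.hits += 1
            self.cache[blk] = False
            self.cache.move_to_end(blk)
        else:
            self.misses += 1
            self._insert(blk, prefetched=False)
        if blk + 1 not in self.cache:           # sequential prefetch
            self._insert(blk + 1, prefetched=True)

# A purely sequential scan: only the very first block misses
c = PrefetchCache(capacity=4)
for b in range(10):
    c.read(b)
```

The policy's objective mirrors the abstract: maximize `hits + prefetch_hits` by keeping demand-paged data resident while reclaiming space from prefetches that never paid off.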

Size of Prostatitis Symptoms Using Prostatitis Symptom Index(PSI): The Effect of Prostatitis Symptoms on Quality of Life (전립선염 증상지수를 이용한 전립선염 증상의 규모와 삶의 질에 미치는 영향)

  • Byun, Seok-Soo;Kang, Dae-Hee;Yoo, Keun-Young;Park, Sue-Kyung;Kwak, Cheol;Jo, Moon-Ki;Lee, Chong-Wook;Kim, Hyeon-Hoe
    • Journal of Preventive Medicine and Public Health
    • /
    • v.33 no.4
    • /
    • pp.449-458
    • /
    • 2000
  • Objectives: To determine the prevalence of prostatitis symptoms in the general population by questionnaire survey and to measure the effect of prostatitis symptoms on quality of life (QOL). Materials & Methods: A cross-sectional, community-based epidemiologic study was performed on 2,034 men living in the Seoul metropolitan area, using stepwise random sampling. Of the 2,034 interviewees, 1,356 men who were older than 40 and provided sufficient information were selected for this study. The questionnaires were completed by well-trained interviewers and included demographic data, the Prostatitis Symptom Index (PSI), the International Prostate Symptom Score (IPSS), a general health section, and a sexual health section. The PSI was the sum of the scores of three questions about dysuria, penile pain, and perineal pain, and ranged from 0 to 12. Prostatitis symptoms were defined by a score of 4 or more, and the reference group was defined as those with a score of 3 or less. The rate of prostatitis symptoms was assessed according to age, and the QOL of the prostatitis symptom group was compared with that of the reference group. Results: The overall positive rate of prostatitis symptoms measured by the PSI in men older than 40 living in the Seoul metropolitan area was 4.5% (61/1,356), adjusted to 4.8% by the relative proportion of this age group in the general population of the Seoul metropolitan area as compared to Korea and the world. The proportion of the group with prostatitis symptoms assessed by the PSI did not increase with age, although the proportion of participants with moderate to severe lower urinary tract symptoms (LUTS) did increase with age. The group with prostatitis symptoms suffered from a much greater incidence of LUTS compared to the reference group (p<0.05).
The QOL scores of the IPSS, and the general and sexual health status, of the group with prostatitis symptoms were worse than those of the reference group (p<0.05). Conclusions: The positive rate of prostatitis symptoms in men older than 40 living in the Seoul metropolitan area was 4.8%, and it did not increase with age. The general QOL of the group with prostatitis symptoms was much worse than that of the reference group.
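The PSI scoring rule described above is simple enough to state as code. Since the abstract gives only the total range (0-12) over three items, the per-item 0-4 scale below is an inferred assumption.

```python
def psi_score(dysuria, penile_pain, perineal_pain):
    """Prostatitis Symptom Index as described in the abstract: the sum of
    three symptom item scores, total range 0-12 (each item is therefore
    assumed to be scored 0-4; the exact item scaling is not given)."""
    for s in (dysuria, penile_pain, perineal_pain):
        if not 0 <= s <= 4:
            raise ValueError("each item is assumed to be scored 0-4")
    return dysuria + penile_pain + perineal_pain

def symptomatic(psi):
    """Case definition used in the study: a PSI of 4 or more."""
    return psi >= 4
```

A respondent scoring 2, 1, and 1 on the three items would just meet the case definition, while 1, 1, 1 falls into the reference group.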


Evaluation of Diet Quality according to Nutrient Intake between Highly Educated, Married, Unemployed and Employed Women (고학력 기혼여성의 취업여부별 영양소 섭취로 본 식사의 질 평가)

  • Choi Ji-Hyun;Chung Young-Jin
    • Journal of Nutrition and Health
    • /
    • v.39 no.2
    • /
    • pp.160-170
    • /
    • 2006
  • This study was conducted to provide foundation data for health care policy for married women by assessing the dietary intake of highly educated, married, employed and unemployed women. It is a cross-sectional study based on direct interviews using the 24-hour recall method for one day. In selecting the subjects, the married, unemployed women were drawn from an area (Daedeok Science Town) in Daejeon with a high rate of highly educated women, and the married, employed women were drawn from the teaching profession in order to avoid confounding from including a variety of jobs; according to the Korean Standard Classification of Occupations, teaching is the representative occupation of highly educated, married women. To prevent confounding due to age, we selected subjects from each age group at the same rate through random sampling. Women who had not graduated from college, worked only part-time, or had no current spouse were excluded. As a result, data from 486 highly educated, married women (250 unemployed and 236 employed) were analyzed. The unemployed women consumed higher amounts of fat, cholesterol, sodium, vitamin C, and folic acid, while the employed women consumed higher amounts of iron, vitamin $B_1$, and vitamin $B_2$. The P/M/S ratios were 1/1.18/1.05 and 1/1.05/0.87 for the unemployed and employed women, respectively, so the unemployed respondents had a higher saturated fat intake than the employed, in excess of the standard ratio (1/1/1) of the Korean RDA. At the same time, the unemployed respondents' percentages of energy intake from fat (24.8% vs. 23.2%) and animal fat (12.4% vs. 11.4%) were higher than those of the employed respondents. The mean daily intakes of calcium, zinc, and iron for both groups of respondents were lower than the Korean RDA.
For both groups, phosphorus had the highest INQ (Index of Nutritional Quality) and calcium the lowest, and the nutrients with an INQ of less than 1 were calcium and iron. To sum up, the following conclusions can be made: nutrition education and guidance to reduce the intake of fat, especially animal fat, are necessary for unemployed women. In addition, highly educated, married women, whether employed or unemployed, should increase their consumption of foods rich in iron and calcium to prevent anemia and osteoporosis, while decreasing their intake of phosphorus to balance the proportions of calcium and phosphorus.
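The INQ used above has a standard definition: a nutrient's intake relative to its RDA, divided by energy intake relative to the energy RDA, so a value below 1 flags a nutrient under-supplied for the energy consumed. The numbers in the sketch are illustrative, not the study's data.

```python
def inq(nutrient_intake, nutrient_rda, energy_intake, energy_rda):
    """Index of Nutritional Quality (standard definition): nutrient intake
    relative to its RDA, divided by energy intake relative to the energy
    RDA. INQ < 1 means the nutrient is under-supplied for the energy
    consumed."""
    return (nutrient_intake / nutrient_rda) / (energy_intake / energy_rda)

# Hypothetical example: 500 mg calcium against a 700 mg RDA, while
# consuming 1800 kcal against a 2000 kcal energy reference
calcium_inq = inq(500, 700, 1800, 2000)   # below 1 -> under-supplied
```

An INQ of exactly 1 means the nutrient and energy are supplied in proportion to their reference values.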

Effect of Multimodal cues on Tactile Mental Imagery and Attitude-Purchase Intention Towards the Product (다중 감각 단서가 촉각적 심상과 제품에 대한 태도-구매 의사에 미치는 영향)

  • Lee, Yea Jin;Han, Kwanghee
    • Science of Emotion and Sensibility
    • /
    • v.24 no.3
    • /
    • pp.41-60
    • /
    • 2021
  • The purpose of this research was to determine whether multimodal cues in an online shopping environment could enhance consumers' tactile mental imagery, purchase intentions, and attitudes towards an apparel product. One limitation of online retail is that consumers are unable to physically touch the items. However, as tactile information plays an important role in consumer decisions, especially for apparel products, this study investigated the effects of multimodal cues on overcoming the lack of tactile stimuli. In experiment 1, the participants were randomly assigned to one of four conditions for exploring the product (picture only, video without sound, video with corresponding sound, and video with discordant sound), after which tactile mental imagery vividness, ease of imagination, attitude, and purchase intention were measured. The video with discordant sound had the lowest average scores on all dependent variables. A within-participants design was used in experiment 2, in which all participants explored the same product in the four conditions in a random order; they were told that they were visiting four different brands on a price comparison web site. The same variables as in experiment 1 were measured, along with the need for touch, and the repeated-measures ANCOVA revealed that, compared to the other conditions, the video with corresponding sound significantly enhanced tactile mental imagery vividness, attitude, and purchase intention, whereas the discordant condition yielded significantly lower attitude and purchase intention. The dual mediation analysis also revealed that the multimodal cue conditions significantly predicted attitude and purchase intention by sequentially mediating imagery vividness and ease of imagination.
In sum, vivid tactile mental imagery triggered using audio-visual stimuli could have a positive effect on consumer decision making by making it easier to imagine a situation where consumers could touch and use the product.

Real-Time Scheduling Scheme based on Reinforcement Learning Considering Minimizing Setup Cost (작업 준비비용 최소화를 고려한 강화학습 기반의 실시간 일정계획 수립기법)

  • Yoo, Woosik;Kim, Sungjae;Kim, Kwanho
    • The Journal of Society for e-Business Studies
    • /
    • v.25 no.2
    • /
    • pp.15-27
    • /
    • 2020
  • This study starts with the idea that the process of creating a Gantt chart for schedule planning is similar to a Tetris game in which only straight-line pieces appear. In this Tetris-like game, the X axis represents the M machines and the Y axis represents time. It is assumed that every order type can be processed on every machine without splitting, but that a setup cost is incurred, without delay, whenever the order type changes. In this study, the game described above was named Gantris and its game environment was implemented. A schedule generated in real time by an agent trained with deep reinforcement learning is compared with a schedule produced by a human player of the game. The comparison considers two learning environments: a single order list and a random order list. The two systems compared in this study are a four-machine, two-order-type system (4M2T) and a ten-machine, six-order-type system (10M6T). As the performance indicator of a generated schedule, a weighted sum of the setup cost, makespan, and idle time in processing 100 orders was used. In the 4M2T system, regardless of the learning environment, the trained system generated schedules with a better performance index than the human experimenter. In the 10M6T system, the AI generated schedules with a better performance index than the experimenter in the single-order-list environment but a worse one in the random-order-list environment. However, in the number of job changes, the trained system showed better results in both the 4M2T and 10M6T systems, demonstrating excellent scheduling performance.
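The performance indicator named above is a weighted sum, which can be stated directly. The weights below are placeholders, since the study's actual weights are not given in the abstract, and the scores are illustrative numbers.

```python
def schedule_score(setup_cost, makespan, idle_time,
                   w_setup=1.0, w_makespan=1.0, w_idle=1.0):
    """Performance indicator from the study: a weighted sum of setup cost,
    makespan, and idle time over the processed orders (lower is better).
    The weights here are placeholder assumptions."""
    return w_setup * setup_cost + w_makespan * makespan + w_idle * idle_time

# Illustrative comparison of an agent schedule against a human baseline
agent_score = schedule_score(setup_cost=12, makespan=140, idle_time=20)
human_score = schedule_score(setup_cost=18, makespan=150, idle_time=25)
```

With equal weights, the agent's lower setup cost, makespan, and idle time all pull its index below the human baseline.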

Machine learning-based corporate default risk prediction model verification and policy recommendation: Focusing on improvement through stacking ensemble model (머신러닝 기반 기업부도위험 예측모델 검증 및 정책적 제언: 스태킹 앙상블 모델을 통한 개선을 중심으로)

  • Eom, Haneul;Kim, Jaeseong;Choi, Sangok
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.2
    • /
    • pp.105-129
    • /
    • 2020
  • This study uses corporate data from 2012 to 2018, when K-IFRS was applied in earnest, to predict default risks. The data used in the analysis totaled 10,545 rows and 160 columns, including 38 from the statement of financial position, 26 from the statement of comprehensive income, 11 from the statement of cash flows, and 76 financial-ratio indices. Unlike most prior studies, which used default events as the basis for learning about default risk, this study calculated default risk from each company's market capitalization and stock price volatility based on the Merton model. This addresses the data imbalance caused by the scarcity of default events, which had been pointed out as a limitation of the existing methodology, as well as the problem of reflecting the differences in default risk that exist among ordinary companies. Because learning was conducted using only corporate information that is also available for unlisted companies, the default risks of unlisted companies without stock price information can be appropriately derived. This makes it possible to provide stable default risk assessment services to unlisted companies, such as small and medium-sized companies and startups, for which proper default risk is difficult to determine with traditional credit rating models. Although predicting corporate default risk with machine learning has been studied actively in recent years, model bias issues exist because most studies make predictions based on a single model. A stable and reliable valuation methodology is required for the calculation of default risk, given that an entity's default risk information is very widely utilized in the market and sensitivity to differences in default risk is high; strict standards are also required for the calculation methods.
The credit rating method stipulated by the Financial Services Commission in the Financial Investment Business Regulations calls for the preparation of evaluation methods, including verification of their adequacy, in consideration of past statistical data and experience on credit ratings and of changes in future market conditions. This study reduced the bias of individual models by utilizing stacking ensemble techniques that synthesize various machine learning models, capturing the complex nonlinear relationships between default risk and various corporate information while retaining the advantage of machine learning-based default risk prediction models, which take less time to calculate. To generate the sub-model forecasts used as input data for the stacking ensemble model, the training data were divided into seven pieces, and the sub-models were trained on the divided sets to produce forecasts. To compare predictive power, Random Forest, MLP, and CNN models were trained with the full training data, and the predictive power of each model was then verified on the test set. The analysis showed that the stacking ensemble model exceeded the predictive power of the Random Forest model, which had the best performance among the single models. Next, to check for statistically significant differences between the stacking ensemble model's forecasts and those of each individual model, pairs consisting of the stacking ensemble model and each individual model were constructed. Because the Shapiro-Wilk normality test showed that no pair followed a normal distribution, the nonparametric Wilcoxon rank sum test was used to check whether the two sets of forecasts making up each pair showed statistically significant differences. The analysis showed that the forecasts of the stacking ensemble model differed significantly from those of the MLP and CNN models.
In addition, this study provides a methodology that allows existing credit rating agencies to apply machine learning-based bankruptcy risk prediction methodologies, given that traditional credit rating models can also be incorporated as sub-models in calculating the final default probability. The stacking ensemble techniques proposed in this study can also help designs meet the requirements of the Financial Investment Business Regulations through the combination of various sub-models. We hope that this research will be used as a resource to increase practical use by overcoming and improving the limitations of existing machine learning-based models.
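The stacking step described above (divide the training data into seven pieces, train sub-models on the complement of each piece, and forecast the held-out piece) can be sketched in pure Python. The `MeanModel` stand-in and the `fit`/`predict` interface are hypothetical illustrations, not the paper's sub-models.

```python
class MeanModel:
    """Trivial stand-in sub-model (hypothetical): predicts the training mean."""
    def fit(self, X, y):
        self.mu = sum(y) / len(y)
    def predict(self, X):
        return [self.mu for _ in X]

def out_of_fold_forecasts(X, y, models, k=7):
    """Sketch of the stacking input step: split the training data into k
    folds (k=7 as in the study), train each sub-model on the complement of
    each fold, and forecast that fold. The result is one row of sub-model
    forecasts per training example, ready to feed to a meta-model."""
    n = len(X)
    folds = [list(range(i, n, k)) for i in range(k)]   # simple interleaved folds
    meta_X = [[None] * len(models) for _ in range(n)]
    for fold in folds:
        held_out = set(fold)
        train_idx = [i for i in range(n) if i not in held_out]
        Xtr = [X[i] for i in train_idx]
        ytr = [y[i] for i in train_idx]
        for j, model in enumerate(models):
            model.fit(Xtr, ytr)                        # fit on the complement
            for i in fold:                             # forecast the held-out fold
                meta_X[i][j] = model.predict([X[i]])[0]
    return meta_X

# 14 samples in 7 folds of 2; each fold's training set is class-balanced,
# so the mean predictor forecasts 0.5 for every held-out example
X = list(range(14))
y = [0.0] * 7 + [1.0] * 7
meta = out_of_fold_forecasts(X, y, [MeanModel()], k=7)
```

Because each forecast comes from a model that never saw that example, the meta-model is trained on unbiased sub-model outputs; scikit-learn's `StackingClassifier` automates the same scheme.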

Multi-level Analysis of the Antecedents of Knowledge Transfer: Integration of Social Capital Theory and Social Network Theory (지식이전 선행요인에 관한 다차원 분석: 사회적 자본 이론과 사회연결망 이론의 결합)

  • Kang, Minhyung;Hau, Yong Sauk
    • Asia pacific journal of information systems
    • /
    • v.22 no.3
    • /
    • pp.75-97
    • /
    • 2012
  • Knowledge residing in the heads of employees has always been regarded as one of the most critical resources within a firm. However, many attempts to facilitate knowledge transfer among employees have been unsuccessful because of the motivational and cognitive problems between the knowledge source and the recipient. Social capital, defined as "the sum of the actual and potential resources embedded within, available through, derived from the network of relationships possessed by an individual or social unit [Nahapiet and Ghoshal, 1998]," has been suggested as a way to resolve these motivational and cognitive problems of knowledge transfer. There are two research streams in social capital theory. One insists that social capital strengthens group solidarity and brings about cooperative behaviors among group members, such as voluntary help to colleagues; social capital can therefore motivate an expert to transfer his/her knowledge to a colleague in need without any direct reward. The other insists that social capital provides access to various resources that the owner of social capital does not possess directly; in a knowledge transfer context, an employee with social capital can access and learn much knowledge from his/her colleagues. Social capital therefore benefits both the knowledge source and the recipient, in different ways. However, prior research on knowledge transfer and social capital is mostly limited to one of the two research streams and covers only the knowledge source's or the knowledge recipient's perspective. Social network theory, which focuses on the structural dimension of social capital, provides a clear explanation of the in-depth mechanisms behind social capital's two different benefits. A 'strong tie' builds up identification, trust, and emotional attachment between the knowledge source and the recipient; therefore, it motivates the knowledge source to transfer his/her knowledge to the recipient.
On the other hand, a 'weak tie' easily expands to 'diverse' knowledge sources because it does not take much effort to manage. The real value of a 'weak tie' therefore comes from the 'diverse network structure,' not the 'weak tie' itself. This implies that the two different perspectives on the strength of ties can co-exist; for example, an extroverted employee can manage many 'strong' ties with 'various' colleagues. In this regard, the individual-level structure of one's relationships, as well as the dyadic-level relationship, should be considered together to provide a holistic view of social capital, and the interaction effect between individual-level and dyadic-level characteristics can be examined too. Based on these arguments, this study has the following research questions: (1) How does the social capital of the knowledge source and the recipient influence knowledge transfer, respectively? (2) How does the strength of ties between the knowledge source and the recipient influence knowledge transfer? (3) How does the social capital of the knowledge source and the recipient influence the effect of the strength of ties between the knowledge source and the recipient on knowledge transfer? Based on social capital theory and social network theory, a multi-level research model is developed that considers both the individual-level social capital of the knowledge source and the recipient and the dyadic-level strength of the relationship between them. The 'cross-classified random effect model,' one of the multi-level analysis methods, is adopted to analyze the survey responses from 337 R&D employees. The results of the analysis provide several findings. First, among the three dimensions of the knowledge source's social capital, network centrality (i.e., the structural dimension) shows a significant direct effect on knowledge transfer. On the other hand, the knowledge recipient's network centrality is not influential.
Instead, it strengthens the influence of the strength of ties between the knowledge source and the recipient on knowledge transfer. That is, the knowledge recipient's network centrality does not directly increase knowledge transfer; instead, by providing access to various knowledge sources, it provides the context in which a strong tie between the knowledge source and the recipient leads to effective knowledge transfer. In short, network centrality has an indirect effect on knowledge transfer from the knowledge recipient's perspective, while it has a direct effect from the knowledge source's perspective. This is the most important contribution of this research. In addition, contrary to the research hypothesis, the company tenure of the knowledge recipient negatively influences knowledge transfer, which means that experienced employees do not look for new knowledge and stick to their own. This is also an interesting result; one possible reason is the hierarchical culture of Korea, such as a fear of losing face in front of subordinates. From a research methodology perspective, the multi-level analysis adopted in this study seems very promising for management research areas that have a multi-level data structure, such as employee-team-department-company. Social network analysis is also a promising research approach, given the exploding availability of online social network data.
