
Determinants Affecting Organizational Open Source Software Switch and the Moderating Effects of Managers' Willingness to Secure SW Competitiveness (조직의 오픈소스 소프트웨어 전환에 영향을 미치는 요인과 관리자의 SW 경쟁력 확보의지의 조절효과)

  • Sanghyun Kim;Hyunsun Park
    • Information Systems Review
    • /
    • v.21 no.4
    • /
    • pp.99-123
    • /
    • 2019
  • The software industry is a high value-added industry in the knowledge information age, and its importance is growing as it not only plays a key role in knowledge creation and utilization but also secures global competitiveness. Among the various kinds of software available in today's business environment, open source software (OSS) is rapidly expanding its reach, not only leading software development but also integrating with new information technologies. The purpose of this research is therefore to empirically examine and analyze the factors affecting the switch to OSS. To accomplish this purpose, we propose a research model based on the "Push-Pull-Mooring" framework and empirically examine two categories of antecedents of switching behavior toward OSS. A survey was conducted among employees at various firms that had already switched to OSS. A total of 268 responses were collected and analyzed using structural equation modeling. The results are as follows. First, continuous maintenance cost, vendor dependency, functional indifference, and SW resource inefficiency are significantly related to the switch to OSS. Second, network-oriented support, testability, and strategic flexibility are significantly related to the switch to OSS. Finally, willingness to secure SW competitiveness moderates the relationships between the push and pull factors (with the exception of improved knowledge) and the switch to OSS. These results will contribute to OSS-related fields both theoretically and practically.
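
A moderating effect of the kind tested in this study is commonly modeled as an interaction term in a regression. A minimal sketch on synthetic data (all variable names and effect sizes are illustrative, not the paper's estimates):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 268  # sample size matching the study's survey

# Hypothetical standardized scores: a push factor (e.g. maintenance cost),
# the moderator (willingness to secure SW competitiveness), and
# intention to switch to OSS. All names and coefficients are illustrative.
push = rng.normal(size=n)
moderator = rng.normal(size=n)
switch = (0.5 * push + 0.3 * moderator + 0.4 * push * moderator
          + rng.normal(scale=0.5, size=n))

# Moderation = the interaction term push*moderator in the design matrix.
X = np.column_stack([np.ones(n), push, moderator, push * moderator])
beta, *_ = np.linalg.lstsq(X, switch, rcond=None)
b0, b_push, b_mod, b_int = beta
print(f"interaction coefficient: {b_int:.2f}")  # nonzero -> moderating effect
```

A significant interaction coefficient is what "moderating effect" means operationally; structural equation modeling estimates the same kind of term within a latent-variable model.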

The Association of Dual Job on Dental Hygienists' Job Satisfaction (치과위생사의 직무만족도와 동시일자리(부업)의 관련성)

  • Mi-Sook Yoon;Go-eun Kim;Han-A Cho
    • Journal of Korean Dental Hygiene Science
    • /
    • v.6 no.2
    • /
    • pp.51-64
    • /
    • 2023
  • Background: This study was conducted to examine the association between dental hygienists' job satisfaction and holding a dual job (a second job), and to identify the factors that lead to dual jobs. Methods: An online survey of 110 currently employed dental hygienists was conducted during May 2022. Job satisfaction was measured with the 20-item Korea-Minnesota Satisfaction Questionnaire (K-MSQ). Survey questions on dual jobs were adapted and supplemented from a dual-job survey instrument for dental hygienists to identify current dual-job status and future dual-job intention. Descriptive statistics, independent t-tests, ANOVA with Scheffe's post hoc analysis, and multiple logistic regression were performed. Results: The current and future dual-job rates of the participants were about 27% and 47%, respectively. The means for intrinsic job satisfaction, extrinsic job satisfaction, and overall job satisfaction were 3.44, 3.15, and 3.36, respectively. Extrinsic job satisfaction increased significantly with position, and intrinsic, extrinsic, and overall job satisfaction increased significantly with salary. Those currently working dual jobs cited "self-actualization" as a reason for doing so, while those intending to work dual jobs in the future cited "not being paid enough in their primary job." A one-unit increase in intrinsic job satisfaction and in overall job satisfaction increased the odds of future dual-job intention by about 1.07 and 1.05 times, respectively (p<0.05). Conclusion: This study confirmed the influence of dental hygienists' job satisfaction on current and future dual-job intention, with self-actualization as the main factor. Consideration of dual jobs may therefore support dental hygienists' development as professionals and, through better working conditions, reduce turnover.
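
The reported odds ratios can be read directly off logistic-regression coefficients, since OR = exp(beta). A small sketch of the arithmetic (only the 1.07 figure comes from the abstract; the rest is illustrative):

```python
import math

# The paper reports an odds ratio of ~1.07 per one-unit increase in
# intrinsic job satisfaction. In logistic regression, OR = exp(beta),
# so the underlying coefficient would be:
beta = math.log(1.07)
print(f"coefficient: {beta:.4f}")

def odds_multiplier(delta):
    """Multiplicative change in the odds for a `delta`-unit predictor change."""
    return math.exp(beta * delta)

# A one-unit rise in satisfaction multiplies the odds of future dual-job
# intention by 1.07; a two-unit rise multiplies them by 1.07**2.
print(round(odds_multiplier(2), 4))  # 1.1449
```

Note that odds ratios multiply across units of change rather than add, which is why a modest-looking 1.07 compounds over larger satisfaction differences.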

Real data-based active sonar signal synthesis method (실데이터 기반 능동 소나 신호 합성 방법론)

  • Yunsu Kim;Juho Kim;Jongwon Seok;Jungpyo Hong
    • The Journal of the Acoustical Society of Korea
    • /
    • v.43 no.1
    • /
    • pp.9-18
    • /
    • 2024
  • The importance of active sonar systems is growing due to the increasing quietness of underwater targets and rising ambient noise from maritime traffic. However, the low signal-to-noise ratio of the echo signal, caused by multipath propagation, various clutter, ambient noise, and reverberation, makes it difficult to identify underwater targets with active sonar. Data-driven methods such as machine learning and deep learning have been applied to improve the performance of underwater target recognition systems, but the nature of sonar datasets makes it difficult to collect enough data for training. Methods based on mathematical modeling have mainly been used to compensate for insufficient active sonar data, but they are limited in their ability to accurately simulate complex underwater phenomena. In this paper, we therefore propose a sonar signal synthesis method based on a deep neural network. To apply a neural network to sonar signal synthesis, the proposed method adapts the attention-based encoder and decoder, the core modules of the Tacotron model widely used in speech synthesis, to sonar signals. Training the proposed model on a dataset collected by placing a simulated target in an actual marine environment makes it possible to synthesize signals closer to real ones. To verify the performance of the proposed method, a Perceptual Evaluation of Audio Quality (PEAQ) test was conducted, and score differences within -2.3 of the actual signal were obtained across four different environments. These results demonstrate that the active sonar signals generated by the proposed method approximate real signals.
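
For readers unfamiliar with the encoder-decoder machinery mentioned above, here is a generic scaled dot-product attention sketch in NumPy. This is not Tacotron's exact (content-based, location-sensitive) attention, just the basic mechanism by which each decoder step weights the encoded input frames, with made-up dimensions:

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(queries, keys, values):
    """Scaled dot-product attention.
    Shapes: queries (T_q, d), keys (T_k, d), values (T_k, d_v)."""
    d = queries.shape[-1]
    scores = queries @ keys.T / np.sqrt(d)  # (T_q, T_k) alignment scores
    weights = softmax(scores, axis=-1)      # each row sums to 1
    return weights @ values, weights

rng = np.random.default_rng(1)
enc = rng.normal(size=(20, 16))  # 20 encoded frames of a signal (illustrative)
dec = rng.normal(size=(5, 16))   # 5 decoder query steps
ctx, w = attention(dec, enc, enc)
print(ctx.shape)                 # (5, 16): one context vector per decoder step
```

Each context vector is a convex combination of encoder frames, which is what lets the decoder "align" its output timeline with the input sequence.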

One-probe P300 based concealed information test with machine learning (기계학습을 이용한 단일 관련자극 P300기반 숨김정보검사)

  • Hyuk Kim;Hyun-Taek Kim
    • Korean Journal of Cognitive Science
    • /
    • v.35 no.1
    • /
    • pp.49-95
    • /
    • 2024
  • Polygraph examination, statement validity analysis, and the P300-based concealed information test are the three major tools used to assess a person's truthfulness and credibility in criminal procedure. Although the polygraph examination is the most common of the three, it has little admissibility as evidence because of its weak scientific basis. In the 1990s, to address this weakness, Farwell and Donchin proposed the P300-based concealed information test. This test has two strengths: it is easy to conduct alongside the polygraph, and it has a substantial scientific basis. Nevertheless, it is used infrequently because of the number of probe stimuli it requires. A probe stimulus contains closed information relevant to the crime or other investigated situation. The traditional P300-based concealed information test protocol requires three or more probe stimuli, but these are hard to acquire because most crime-relevant information becomes public during an investigation. In addition, the test uses the oddball paradigm, which creates an imbalance between the numbers of probe and irrelevant stimuli; this imbalance may cause a systematic underestimation of the P300 amplitude of irrelevant stimuli. To overcome these two limitations, a one-probe P300-based concealed information test protocol is explored with various machine learning algorithms. According to this study, the parameters of the modified one-probe protocol are as follows. For female and male face stimuli, the recommended stimulus duration is 400 ms, with 60 repetitions, the peak-to-peak method for P300 amplitude analysis, a guilty-condition cut-off of 90%, and an innocent-condition cut-off of 30%. For two-syllable word stimuli, the recommended duration is 300 ms, with the same number of repetitions, analysis method, and cut-offs. It was also confirmed that logistic regression (LR), linear discriminant analysis (LDA), and K-nearest neighbors (KNN) are suitable algorithms for analyzing P300 amplitude. The one-probe P300-based concealed information test with machine learning can increase the utilization of the P300-based concealed information test and, together with the polygraph examination, support the assessment of truthfulness and credibility in criminal procedure.
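
As a rough illustration of the peak-to-peak amplitude analysis and the classification step, here is a sketch on synthetic ERP data. The waveform shape, analysis window, and nearest-centroid classifier (a simplified stand-in for the LR/LDA/KNN models above) are all assumptions:

```python
import numpy as np

def peak_to_peak(erp, window):
    """Peak-to-peak P300 amplitude: maximum minus minimum deflection
    inside the analysis window (given as sample indices)."""
    seg = erp[window[0]:window[1]]
    return seg.max() - seg.min()

rng = np.random.default_rng(2)
t = np.arange(600)  # hypothetical sample axis

def simulate(p300_amp):
    """Toy ERP: a Gaussian bump around sample 300 plus noise."""
    bump = p300_amp * np.exp(-((t - 300) ** 2) / (2 * 40**2))
    return bump + rng.normal(scale=0.5, size=t.size)

# Probe stimuli evoke a larger P300 in the guilty condition than
# irrelevant stimuli do; amplitudes here are invented for illustration.
guilty = [peak_to_peak(simulate(6.0), (200, 450)) for _ in range(30)]
innocent = [peak_to_peak(simulate(1.0), (200, 450)) for _ in range(30)]

X = np.array(guilty + innocent).reshape(-1, 1)
y = np.array([1] * 30 + [0] * 30)

# Nearest-centroid rule on the amplitude feature.
centroids = np.array([X[y == 0].mean(), X[y == 1].mean()])
pred = (np.abs(X - centroids[1]) < np.abs(X - centroids[0])).astype(int).ravel()
print("accuracy:", (pred == y).mean())
```

With a single amplitude feature like this, LR, LDA, and KNN all reduce to choosing a threshold, which is why the paper can compare them on the same protocol.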

Dynamic Limit and Predatory Pricing Under Uncertainty (불확실성하(不確實性下)의 동태적(動態的) 진입제한(進入制限) 및 약탈가격(掠奪價格) 책정(策定))

  • Yoo, Yoon-ha
    • KDI Journal of Economic Policy
    • /
    • v.13 no.1
    • /
    • pp.151-166
    • /
    • 1991
  • In this paper, a simple game-theoretic entry deterrence model is developed that integrates both limit pricing and predatory pricing. While there have been extensive studies dealing with predation and limit pricing separately, no study so far has analyzed these closely related practices in a unified framework. Treating each practice as if it were an independent phenomenon is, of course, an analytical necessity to abstract from complex realities. However, welfare analysis based on such a model may give misleading policy implications. By analyzing limit and predatory pricing within a single framework, this paper attempts to shed some light on the effects of interactions between these two frequently cited tactics of entry deterrence. Another distinctive feature of the paper is that limit and predatory pricing emerge, in equilibrium, as rational, profit-maximizing strategies in the model. Until recently, the only conclusion from formal analyses of predatory pricing was that predation is unlikely to take place if every economic agent is assumed to be rational. This conclusion rests on the argument that predation is costly; that is, it inflicts more losses upon the predator than upon the rival producer, and therefore is unlikely to succeed in driving out the rival, who understands that the price cutting, if it ever takes place, must be temporary. Recently several attempts have been made to overcome this modelling difficulty, by Kreps and Wilson, Milgrom and Roberts, Benoit, Fudenberg and Tirole, and Roberts. With the exception of Roberts, however, these studies, though successful in preserving the rationality of players, still share one serious weakness in that they resort to ad hoc, external constraints in order to generate profit-maximizing predation.
The present paper uses a highly stylized model of Cournot duopoly and derives the equilibrium predatory strategy without invoking external constraints except the assumption of asymmetrically distributed information. The underlying intuition behind the model can be summarized as follows. Imagine a firm that is considering entry into a monopolist's market but is uncertain about the incumbent's cost structure. If the monopolist has low costs, the rival would rather not enter, because it would be difficult to compete with an efficient, low-cost firm. If the monopolist has high costs, however, the rival will definitely enter the market, because it can make positive profits. In this situation, if the incumbent unwittingly produces its monopoly output, the entrant can infer the nature of the monopolist's costs by observing its price. Knowing this, the high-cost monopolist increases its output to the level that would have been produced by a low-cost firm, in an effort to conceal its cost condition. This constitutes limit pricing. The same logic applies when there is a rival competitor in the market. Producing the high-cost duopoly output is self-revealing and thus to be avoided. The firm therefore chooses to produce the low-cost duopoly output, consequently inflicting losses on the entrant or rival producer, thus acting in a predatory manner. The policy implications of the analysis are rather mixed. Contrary to the widely accepted hypothesis that predation is, at best, a negative-sum game, and thus a strategy unlikely to be played from the outset, this paper concludes that predation can be a real occurrence by showing that it can arise as an effective profit-maximizing strategy. This conclusion alone may imply that the government can play a role in increasing consumer welfare, say, by banning predation or limit pricing. The problem, however, is that it is rather difficult to ascribe any welfare losses to these kinds of entry-deterring practices.
This difficulty arises from the fact that if the same practices had been adopted by a low-cost firm, they could not be called entry-deterring. Moreover, the high-cost incumbent in the model is doing exactly what the low-cost firm would have done to keep the market to itself. All in all, this paper suggests that a government injunction against limit and predatory pricing should be applied with great care, evaluating each case on its own merits. Hasty generalization may work to the detriment, rather than the enhancement, of consumer welfare.
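
The intuition above can be checked with a small numerical example. Assuming linear demand P = a - b(q1 + q2) and constant marginal costs (all numbers are illustrative, not from the paper):

```python
# Linear-demand Cournot sketch of the limit/predatory pricing intuition.
a, b = 100.0, 1.0
c_low, c_high, c_entrant = 10.0, 30.0, 20.0  # illustrative marginal costs

def monopoly_q(c):
    """Monopoly output under linear demand P = a - b*q."""
    return (a - c) / (2 * b)

def cournot_q(own_c, rival_c):
    """Own Cournot equilibrium quantity given both firms' marginal costs."""
    return (a - 2 * own_c + rival_c) / (3 * b)

# The naive monopoly output reveals the incumbent's cost type,
# so a high-cost incumbent pools on the low-cost output (limit pricing):
print(monopoly_q(c_low), monopoly_q(c_high))  # 45.0 35.0

def entrant_profit(mimicked_cost):
    """Entrant's Cournot profit when the incumbent produces as if its
    marginal cost were `mimicked_cost` and the entrant best-responds."""
    qi = cournot_q(mimicked_cost, c_entrant)
    qe = cournot_q(c_entrant, mimicked_cost)
    price = a - b * (qi + qe)
    return (price - c_entrant) * qe

# Mimicking the low-cost duopoly output squeezes the entrant's profit,
# which is the predatory behavior in the model:
print(round(entrant_profit(c_high), 2))  # 900.0
print(round(entrant_profit(c_low), 2))   # 544.44
```

The high-cost incumbent's expanded output lowers the market price and the entrant's profit, exactly the self-concealing, predatory logic the abstract describes; whether the entrant's profit turns negative depends on its fixed entry costs, which are omitted here.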


The 1998, 1999 Patterns of Care Study for Breast Irradiation after Mastectomy in Korea (1998, 1999년도 우리나라에서 시행된 근치적 유방 전절제술 후 방사선치료 현황 조사)

  • Keum, Ki-Chang;Shim, Su-Jung;Lee, Ik-Jae;Park, Won;Lee, Sang-Wook;Shin, Hyun-Soo;Chung, Eun-Ji;Chie, Eui-Kyu;Kim, Il-Han;Oh, Do-Hoon;Ha, Sung-Whan;Lee, Hyung-Sik;Ahn, Sung-Ja
    • Radiation Oncology Journal
    • /
    • v.25 no.1
    • /
    • pp.7-15
    • /
    • 2007
  • Purpose: To determine the patterns of evaluation and treatment in patients with breast cancer who received radiotherapy after mastectomy. A nationwide study was performed with the goal of improving radiotherapy treatment. Materials and Methods: A web-based database system for the Korean Patterns of Care Study (PCS) for six common cancers was developed. Randomly selected records of 286 eligible patients treated between 1998 and 1999 at 17 hospitals were reviewed. Results: The ages of the study patients ranged from 20 to 80 years (median 44 years). The pathologic T stage by the AJCC was T1 in 9.7% of cases, T2 in 59.2%, T3 in 25.6%, and T4 in 5.3%. For nodal involvement, N0 accounted for 7.3% of cases, N1 for 14%, N2 for 38.8%, and N3 for 38.5%. The AJCC stage was I in 0.7% of cases, IIa in 3.8%, IIb in 9.8%, IIIa in 43%, IIIb in 2.8%, and IIIc in 38.5%. Various sequences of chemotherapy and radiotherapy were used after mastectomy. Mastectomy and chemotherapy followed by radiotherapy was the most common sequence (47% of cases). Mastectomy, chemotherapy, and radiotherapy followed by additional chemotherapy was performed in 35% of cases, and neoadjuvant chemoradiotherapy in 12.5%. The radiotherapy volume was the chest wall only in 5.6% of cases; the chest wall and supraclavicular fossa (SCL) in 20.3%; the chest wall, SCL, and internal mammary lymph nodes (IMN) in 27.6%; the chest wall, SCL, and posterior axillary lymph nodes (PAB) in 25.9%; and the chest wall, SCL, IMN, and PAB in 19.9%. Two patients received IMN irradiation only. The chest wall was irradiated with tangential fields in 57.3% of cases and with electron beams in 42%. A chest wall bolus was used in 54.8% of the tangential-field cases and 52.5% of the electron-beam cases. The radiation dose was 45~59.4 Gy (median 50.4 Gy) to the chest wall, 45~59.4 Gy (median 50.4 Gy) to the SCL, and 4.8~38.8 Gy (median 9 Gy) to the PAB. Conclusion: Radiotherapy methods for breast cancer patients after mastectomy differed among hospitals, with most of the variation in the irradiation of the chest wall. A separate analysis of the details of radiotherapy planning, together with treatment outcomes, is needed to evaluate these different processes.

Immediate Reoperation for Failed Mitral Valve Repair (승모판막성형술 실패 직후에 시행한 재수술)

  • Baek, Man-Jong;Na, Chan-Young;Oh, Sam-Se;Kim, Woong-Han;Whang, Sung-Wook;Lee, Cheol;Chang, Yun-Hee;Jo, Won-Min;Kim, Jae-Hyun;Seo, Hong-Ju;Kim, Wook-Sung;Lee, Young-Tak;Park, Young-Kwan;Kim, Chong-Whan
    • Journal of Chest Surgery
    • /
    • v.36 no.12
    • /
    • pp.929-936
    • /
    • 2003
  • We analyzed the surgical outcomes of immediate reoperations after mitral valve repair. Material and Method: Eighteen patients who underwent immediate reoperation for failed mitral valve repair from April 1995 through July 2001 were reviewed retrospectively. Thirteen patients were female. The mitral valve disease was regurgitation (MR) in 12 patients, stenosis (MS) in 3, and a mixed lesion in 3. The etiologies of the valve disease were rheumatic in 9 patients, degenerative in 8, and endocarditis in 1. The causes of reoperation were residual MR in 13 patients, residual MS in 4, and rupture of the left ventricle in 1. Fourteen patients (77.8%) underwent re-repair for residual mitral lesions, and four underwent replacement. Result: There was no early death. After a mean follow-up of 33 months, there was one late death. Echocardiography revealed no or grade 1 MR in 9 patients (64.3%) and no or mild MS in 11 patients (78.6%). Reoperation was required in one patient. The cumulative survival and freedom from valve-related reoperation at 6 years were 94% and 90%, respectively. The cumulative freedom from recurrent MR and MS at 4 years were 56% and 44%, respectively. Conclusion: This study suggests that immediate reoperation for failed mitral valve repair offers good early and intermediate survival, and that mitral valve re-repair can be performed successfully in most patients. However, because re-repair has a high failure rate, especially in rheumatic valve disease, adequate selection of the valvuloplasty technique and its indication is important to reduce the failure rate of mitral re-repair.
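
Cumulative survival and freedom-from-reoperation figures like those quoted above are typically Kaplan-Meier (product-limit) estimates. A minimal sketch of the estimator; the follow-up times and events below are made up for illustration, not the study's records:

```python
def kaplan_meier(times, events):
    """Product-limit estimator. times: follow-up in months;
    events: 1 = event (death/reoperation), 0 = censored.
    Returns [(t, S(t))] at each event time."""
    pairs = sorted(zip(times, events))
    n_at_risk = len(pairs)
    s, curve = 1.0, []
    i = 0
    while i < len(pairs):
        t = pairs[i][0]
        d = sum(e for tt, e in pairs if tt == t)        # events at time t
        removed = sum(1 for tt, _ in pairs if tt == t)  # leave the risk set
        if d:
            s *= 1 - d / n_at_risk
            curve.append((t, round(s, 3)))
        n_at_risk -= removed
        i += removed
    return curve

times = [3, 12, 12, 24, 33, 40, 50, 60, 72]   # illustrative follow-up (months)
events = [0, 1, 0, 0, 1, 0, 0, 0, 0]
print(kaplan_meier(times, events))  # [(12, 0.875), (33, 0.7)]
```

The curve steps down only at event times, while censored patients simply leave the risk set, which is what lets a small series with uneven follow-up still yield 4- and 6-year estimates.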

Weight loss effects of Bariatric Surgery after nutrition education in extremely obese patients (고도비만환자에서 베리아트릭 수술 (Bariatric Surgery) 후 영양교육이 체중감량에 미치는 효과)

  • Jeong, Eun-Ha;Lee, Hong-Chan;Yim, Jung-Eun
    • Journal of Nutrition and Health
    • /
    • v.48 no.1
    • /
    • pp.30-45
    • /
    • 2015
  • Purpose: This study was planned to determine the characteristics of extremely obese patients undergoing bariatric surgery and to evaluate how the number of postsurgical personal nutrition education sessions they received affected weight loss. Methods: This is a retrospective study based on the medical records of extremely obese patients over the 15 months after they received gastric banding. Sixty subjects were selected and divided into a Less Educated Group and a More Educated Group according to the number of personal nutrition education sessions they received relative to the average. For both groups we investigated general characteristics, health-related lifestyle habits, obesity-related complications and symptoms, and eating habits before surgery; body composition measurements and obesity indices before surgery and at 1, 3, 6, 9, 12, and 15 months after surgery; and biochemical parameters before surgery and at 6 months after. Results: Body fat and weight decreased rapidly until 6 months after surgery and more slowly thereafter, according to the body composition measurements. At 15 months after surgery, the More Educated Group, who received nutrition education more often, showed significantly lower body fat and weight than the Less Educated Group, as well as significantly lower BMI and degree of obesity. BMI at 15 months was inversely related to the number of personal nutrition education sessions, an effect more pronounced after surgery than before. Conclusion: Long-term nutrition education is a key factor in maintaining the weight- and body-fat-reducing effects of bariatric surgery in extremely obese patients.
As a next step, given the characteristics of the study subjects, individual nutrition education is recommended for postsurgical management of obesity, in order to monitor blood pressure, obesity-related complications and symptoms, and changes in eating and health-related lifestyle habits, and at the same time to judge the actual effect of the nutrition education method.
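
The obesity indices tracked in studies like this reduce to a few simple formulas. A small sketch; the reference weights are illustrative, and %EWL (percent excess weight loss, a standard bariatric outcome measure) is an assumption about the kind of index used, not the paper's exact definition of "degree of obesity":

```python
def bmi(weight_kg, height_m):
    """Body mass index: weight divided by height squared."""
    return weight_kg / height_m ** 2

def percent_ewl(initial_kg, current_kg, ideal_kg):
    """Share of excess weight (above ideal body weight) that was lost."""
    return 100 * (initial_kg - current_kg) / (initial_kg - ideal_kg)

# Illustrative patient: 110 kg at 1.65 m before surgery, 85 kg at follow-up,
# with an assumed ideal body weight of 60 kg.
print(round(bmi(110, 1.65), 1))            # 40.4
print(round(percent_ewl(110, 85, 60), 1))  # 50.0 -> half the excess lost
```

Tracking %EWL rather than raw weight makes patients of different heights and starting weights comparable, which matters when groups are contrasted at fixed follow-up points as above.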

Mature Market Sub-segmentation and Its Evaluation by the Degree of Homogeneity (동질도 평가를 통한 실버세대 세분군 분류 및 평가)

  • Bae, Jae-ho
    • Journal of Distribution Science
    • /
    • v.8 no.3
    • /
    • pp.27-35
    • /
    • 2010
  • As the population, buying power, and intensity of self-expression of the elderly generation increase, its importance as a market segment is also growing. Therefore, the mass marketing strategy for the elderly generation must be changed to a micro-marketing strategy based on the results of sub-segmentation that suitably captures the characteristics of this generation. Furthermore, as the customer access strategy is decided by sub-segmentation, proper segmentation is one of the key success factors for micro-marketing. Segments or sub-segments differ from sectors, because segmentation or sub-segmentation for micro-marketing is based on the homogeneity of customer needs. Theoretically, complete segmentation would reveal a single voice. However, complete segmentation is impossible to achieve because of economic factors, factors that affect effectiveness, and so on. To obtain a single voice from a segment, we sometimes need to divide it into many individual cases, which would leave many segments to deal with. On the other hand, to maximize market access performance, fewer segments are preferred. In this paper, we use the term "sub-segmentation" instead of "segmentation," because we divide a specific segment into more detailed segments. To sub-segment the elderly generation, this paper takes their lifestyles and life stages into consideration. To reflect these aspects, various surveys and several rounds of expert interviews and focus group interviews (FGIs) were performed. Using the results of these qualitative surveys, we define six sub-segments of the elderly generation. This paper uses five rules to divide the elderly generation: (1) mutually exclusive and collectively exhaustive (MECE) sub-segmentation, (2) important life stages, (3) notable lifestyles, (4) a minimum number of easily classifiable sub-segments, and (5) significant differences in voices among the sub-segments.
The most critical point for dividing the elderly market is whether children are married. The other points are source of income, gender, and occupation. On this basis, the elderly market is divided into six sub-segments. As mentioned, the number of sub-segments is a key point for a successful marketing approach. Too many sub-segments would lead to narrow substantiality or a lack of actionability; too few would have no effect. Therefore, creating the optimum number of sub-segments is a critical problem faced by marketers. This paper presents a method of evaluating the fitness of sub-segments, deduced from the preceding surveys. The method uses the degree of homogeneity (DoH) to measure the adequacy of sub-segments, calculated from quantitative survey questions. The DoH is the ratio of significantly homogeneous questions to the total number of survey questions. A significantly homogeneous question is one in which one case is selected significantly more often than the others, as shown by a hypothesis test: the null hypothesis (H0) is that there is no significant difference between the selection of one case and that of the others, so the number of significantly homogeneous questions is the number of questions for which H0 is rejected. To calculate the DoH, we conducted a quantitative survey (total sample size 400; 60 questions; 4~5 cases per question). The sample size of the first sub-segment (no unmarried offspring, earns a living independently) is 113. The sample size of the second sub-segment (no unmarried offspring, economically supported by its offspring) is 57. The sample size of the third sub-segment (unmarried offspring, male, employed) is 70.
The sample size of the fourth sub-segment (unmarried offspring, male, not employed) is 45. The sample size of the fifth sub-segment (unmarried offspring, female, employed, either the woman herself or her husband) is 63. The sample size of the last sub-segment (unmarried offspring, female, not employed, not even the husband) is 52. Statistically, the sample size of each sub-segment is sufficiently large, so we use the z-test for testing hypotheses. At a significance level of 0.05, the DoHs of the six sub-segments are 1.00, 0.95, 0.95, 0.87, 0.93, and 1.00, respectively. At a significance level of 0.01, they are 0.95, 0.87, 0.85, 0.80, 0.88, and 0.87, respectively. These results show that the first sub-segment is the most homogeneous category, while the fourth has more variety in its needs. If the sample size were sufficiently large, further segmentation within a given sub-segment would be better; however, as the fourth sub-segment is smaller than the others, more detailed segmentation was not performed. A very critical point for a successful micro-marketing strategy is measuring the fit of a sub-segment, yet until now there have been no robust rules for measuring it. This paper presents a method of evaluating the fit of sub-segments that will be very helpful for deciding the adequacy of sub-segmentation. However, the method has some limitations that prevent it from being fully robust: (1) it is restricted to quantitative questions; (2) deciding which types of questions to include in the calculation is difficult; and (3) DoH values depend on how the questionnaire is composed. Despite these limitations, this paper has presented a useful method for conducting adequate sub-segmentation, and we believe it can be applied widely in many areas.
Furthermore, the results of the sub-segmentation of the elderly generation can serve as a reference for mature marketing.
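
The DoH computation described above can be sketched as a one-proportion z-test per question. The critical value, answer counts, and the exact form of the test are assumptions for illustration; the paper's survey data are not reproduced here:

```python
import math

def is_homogeneous(counts, z_crit):
    """One-proportion z-test: is the modal answer chosen significantly
    more often than under the equal-choice null (p0 = 1/k)?
    counts = answer frequencies for one question."""
    n, k = sum(counts), len(counts)
    p_hat, p0 = max(counts) / n, 1 / k
    z = (p_hat - p0) / math.sqrt(p0 * (1 - p0) / n)
    return z > z_crit

def degree_of_homogeneity(questions, z_crit=1.645):
    """DoH = share of questions whose null hypothesis is rejected."""
    hits = sum(is_homogeneous(c, z_crit) for c in questions)
    return hits / len(questions)

# Toy sub-segment: 3 questions, 4 cases each, ~100 respondents per question.
questions = [
    [70, 10, 10, 10],  # one dominant voice
    [60, 20, 10, 10],  # still dominant
    [30, 25, 25, 20],  # no clear voice
]
print(degree_of_homogeneity(questions))  # 2 of 3 rejected -> ~0.667
```

With the default one-sided 5% critical value (z = 1.645), the first two toy questions reject the null and the third does not, giving a DoH of 2/3; tightening the level to 1% raises the critical value and can only lower the DoH, matching the pattern in the reported figures.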


Detoxification of PSP and relationship between PSP toxicity and Protogonyaulax sp. (마비성패류독의 제독방법 및 패류독성과 원인플랑크톤과의 관계에 관한 연구)

  • CHANG Dong-Suck;SHIN Il-Shik;KIM Ji-Hoe;PYUN Jae-hueung;CHOE Wi-Kung
    • Korean Journal of Fisheries and Aquatic Sciences
    • /
    • v.22 no.4
    • /
    • pp.177-188
    • /
    • 1989
  • The purpose of this study was to investigate the detoxifying effect of heat treatment on PSP-infested sea mussels, Mytilus edulis, and the correlation between PSP toxicity and the environmental conditions of the shellfish culture areas, such as temperature, pH, salinity, density of Protogonyaulax sp., and concentration of inorganic nutrients (NH4-N, NO3-N, NO2-N, and PO4-P). The experiments were carried out at Sujŏng in Masan, Yangdo in Jindong, Hachŏng in Kŏjedo, and Gamchŏn Bay in Pusan from February to June in 1987~1989. The detection ratio and toxicity of PSP in sea mussels differed from year to year even in the same collection area. PSP was often detected when the seawater temperature was about 8.0~14.0°C. The PSP toxicity of sea mussels was closely related to the density of Protogonyaulax sp. at Gamchŏn Bay in Pusan from March to April 1989, but no such relationship was observed outside this period during the study. The concentration of inorganic nutrients affected the growth of Protogonyaulax sp., with NO3-N having the strongest effect. When PSP-infested sea mussel homogenate was heated at various temperatures, the PSP toxicity did not change significantly below 70°C for 60 min, but it decreased proportionally as the heating temperature increased. For example, when the homogenate was heated at 100°C or 121°C for 10 min, the toxicity decreased by about 67% and 90%, respectively. On the other hand, when shellstock sea mussels containing 150 μg/100 g of PSP were boiled at 100°C for 30 min in tap water, no toxicity was detected by mouse assay, but a toxicity of 5,400 μg/100 g was reduced only to 57 μg/100 g even after boiling for 120 min.
