Monitoring of Radioactivity and Heavy Metal Contamination of Dried Processed Fishery Products (건조 수산가공식품의 방사능 및 중금속 오염도 조사)
Journal of Food Hygiene and Safety, v.36 no.3, pp.248-256, 2021
A total of 120 samples across 12 categories of dried processed fishery products distributed in Gyeonggi-do were examined for radioactive contamination (131I, 134Cs, 137Cs) and heavy metals (lead, cadmium, arsenic, and mercury). The natural radionuclide 40K was detected in all products, while the artificial radionuclides 131I, 134Cs, and 137Cs were not detected above the minimum detectable activity (MDA). The detection ranges of heavy metals, converted to a fresh-weight basis, were as follows: Pb, N.D.-0.332 mg/kg; Cd, N.D.-2.941 mg/kg; As, 0.371-15.007 mg/kg; Hg, 0.0005-0.0621 mg/kg. Where an acceptable standard exists, heavy metals were detected within the standard levels; however, arsenic content was high in most products, even though no permitted level for arsenic has been set for any of these products. Some dried processed fishery products are rehydrated before consumption, but many are consumed directly in the dry state, so permitted levels for heavy metals should be set with this in mind. In addition, since Japan has decided to release contaminated water from the Fukushima nuclear power plant into the ocean, public concern about radioactive contamination of food, including fishery products, is high. Continuous monitoring of various food items will therefore be necessary to ease consumers' anxiety.
This study investigated the effect of ventilation at high temperature on the control of powdery mildew, silverleaf whitefly, and two-spotted spider mite in a Korean melon greenhouse, and on leaf rolling and flowering of the plants in the summer season. 'Alchanggul' grafted onto 'Hidden Power' rootstock was planted in soil beds at a spacing of 40 cm. Three ventilation set-point temperatures of 45℃, 40℃, and 35℃ were compared. Ventilation was controlled through side-window operation from 18 June to 13 July, when silverleaf whitefly, two-spotted spider mite, and powdery mildew occurred in all greenhouses. On sunny days, the temperature inside the greenhouse rose to the set point and was maintained for about 9 hours, with high relative humidity under the 45℃ condition. Differences in daily maximum air temperature and daily minimum RH were greatest for the 45℃ treatment. After 11 days of treatment, the damage from powdery mildew and two-spotted spider mite had largely recovered in the 45℃ treatment but not at 40℃ or 35℃. Populations of silverleaf whitefly and two-spotted spider mite were significantly decreased in the 45℃ treatment at 14 days after treatment, while powdery mildew symptoms were not significantly decreased. Leaf rolling was observed at high temperature but was not severe in the 45℃ treatment. After 26 days of treatment, female flowers did not bloom at all in the 45℃ treatment, and the number of male flowers was 1.2 among 15 nodes of newly grown shoots. These results indicate that ventilation at a high set point of 45℃ for about 2 to 3 weeks can be an applicable method to control the above pests and disease and to restore the vegetative growth of Korean melon by reducing flowering.
This study investigated the effect of light intensity and photoperiod combinations on the growth and glucosinolate (GSL) content of three Brassicaceae species under the same daily light integral (DLI). Seeds of leaf mustard (Brassica juncea (L.) Czern.), red mustard (Brassica juncea L.), and kale (Brassica oleracea L. var. acephala (DC.) Alef.) were sown in rockwool cubes and grown for three weeks. The DLI was set to 10 mol·m-2·d-1, and plants were treated with 10 h-280, 14 h-200, 18 h-155, or 22 h-127 µmol·m-2·s-1 for three weeks. Under the 14 h-200 µmol·m-2·s-1 treatment, shoot fresh/dry weight, the number of leaves, and leaf area increased in leaf mustard and kale, with no significant difference among the other treatments. In red mustard, total GSL content under the 14 h-200 µmol·m-2·s-1 treatment was significantly higher, at 139.95, 135.87, and 154.03% of the 10 h-280, 18 h-155, and 22 h-127 µmol·m-2·s-1 treatments, respectively; in kale, the 14 h-200 µmol·m-2·s-1 treatment was significantly higher, at 132.96, 132.96, and 134.03% of the other treatments. In leaf mustard, the 18 h-155 µmol·m-2·s-1 treatment showed higher shoot fresh/dry weight and total GSL content than the other photoperiods, including the 14 h-200 µmol·m-2·s-1 treatment, and the number of leaves was significantly higher, by 15.62, 12.12, and 32.14%, than under the other photoperiods. Since the DLI response differs among species even within similar Brassicaceae crops, more detailed studies of light quality and of optimal DLI conditions are needed to achieve minimum power consumption and maximum efficiency.
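As a check on the treatment design above, the four intensity-photoperiod combinations can be verified to deliver approximately the same DLI using the standard conversion DLI = PPFD × photoperiod (in seconds) / 10⁶. A minimal sketch (the function name is illustrative, not from the study):

```python
# Standard conversion from PPFD and photoperiod to daily light integral:
# DLI (mol·m-2·d-1) = PPFD (µmol·m-2·s-1) × photoperiod (h) × 3600 / 10^6
def dli(ppfd_umol, photoperiod_h):
    """Daily light integral in mol·m-2·d-1."""
    return ppfd_umol * photoperiod_h * 3600 / 1e6

# The four treatments all land close to the 10 mol·m-2·d-1 target:
for hours, ppfd in [(10, 280), (14, 200), (18, 155), (22, 127)]:
    print(f"{hours} h at {ppfd} umol·m-2·s-1 -> {dli(ppfd, hours):.3f} mol·m-2·d-1")
```

For example, 200 µmol·m⁻²·s⁻¹ for 14 h gives 200 × 14 × 3600 / 10⁶ = 10.08 mol·m⁻²·d⁻¹.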
Spatial sampling design plays an important role in GIS-based modeling studies because it increases modeling efficiency while reducing the cost of sampling. In the field of agricultural systems, research demand for modeling based on high-resolution spatial data to predict and evaluate climate change impacts is growing rapidly. Accordingly, the need for and importance of spatial sampling design are increasing. The purpose of this study was to design a spatial sampling of paddy fields in Korea (11,386 grids at 1 km spatial resolution) for use in agricultural spatial modeling. A stratified random sampling design was developed and applied for the 2030s, 2050s, and 2080s under two RCP scenarios (RCP 4.5 and 8.5). Twenty-five weather and four soil characteristics were used as stratification variables. Stratification and sample allocation were optimized to ensure a minimum sample size under given precision constraints for 16 target variables such as crop yield, greenhouse gas emission, and pest distribution. Precision and accuracy of the sampling were evaluated through sampling simulations based on the coefficient of variation (CV) and relative bias, respectively. As a result, the paddy fields could be optimally stratified into 5 to 21 strata with 46 to 69 samples. Evaluation results showed that the target variables were within the precision constraints (CV < 0.05, except for crop yield) with low bias values (below 3%). These results can contribute to reducing sampling cost and computation time while maintaining high predictive power. The design is expected to be widely used as a representative sample grid in various agricultural spatial modeling studies.
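As an illustration of the sampling stage only (not the study's actual optimization, which also tunes the stratification and allocation against precision constraints), a stratified random draw with proportional allocation over labeled grid cells might look like the following sketch; the function name and the allocation rule are assumptions:

```python
import random
from collections import defaultdict

# Hypothetical sketch: each grid cell carries a stratum label (e.g. derived
# from the weather/soil stratification variables); samples are then drawn
# at random within each stratum, allocated proportionally to stratum size.
def stratified_sample(grid_ids, strata, n_total, seed=0):
    rng = random.Random(seed)
    by_stratum = defaultdict(list)
    for gid, s in zip(grid_ids, strata):
        by_stratum[s].append(gid)
    sample = []
    for s, members in by_stratum.items():
        # proportional allocation, with at least one sample per stratum
        n_s = max(1, round(n_total * len(members) / len(grid_ids)))
        sample.extend(rng.sample(members, min(n_s, len(members))))
    return sample

# Toy usage: 100 grid cells in 5 equal strata, 20 samples in total
grids = list(range(100))
labels = [g % 5 for g in grids]
picked = stratified_sample(grids, labels, 20)
```

With equal strata, proportional allocation reduces to an even split (here, 4 samples per stratum).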
The wall shear stress in the vicinity of end-to-end anastomoses under steady flow conditions was measured using a flush-mounted hot-film anemometer (FMHFA) probe. The experimental measurements were in good agreement with numerical results except in flows with low Reynolds numbers. The wall shear stress increased proximal to the anastomosis in flow from the Penrose tubing (simulating an artery) to the PTFE graft. In flow from the PTFE graft to the Penrose tubing, low wall shear stress was observed distal to the anastomosis. Abnormal distributions of wall shear stress in the vicinity of the anastomosis, resulting from the compliance mismatch between the graft and the host artery, might be an important factor in the formation of anastomotic neointimal fibrous hyperplasia (ANFH) and in graft failure. The present study suggests a correlation between regions of low wall shear stress and the development of ANFH in end-to-end anastomoses.
Air pressure decay (APD) rate and ultrafiltration rate (UFR) tests were performed on new and saline-rinsed dialyzers as well as those reused in patients several times. C-DAK 4000 (Cordis Dow) and CF IS-11 (Baxter Travenol) reused dialyzers obtained from the dialysis clinic were used in the present study. The new dialyzers exhibited a relatively flat APD, whereas saline-rinsed and reused dialyzers showed a considerable amount of decay. C-DAK dialyzers had a larger APD (11.70
As the population, buying power, and intensity of self-expression of the elderly generation increase, its importance as a market segment is also growing. Therefore, the mass marketing strategy for the elderly generation must be changed to a micro-marketing strategy based on the results of sub-segmentation that suitably captures the characteristics of this generation. Furthermore, as the customer access strategy is decided by sub-segmentation, proper segmentation is one of the key success factors for micro-marketing. Segments or sub-segments differ from sectors, because segmentation or sub-segmentation for micro-marketing is based on the homogeneity of customer needs. Theoretically, complete segmentation would reveal a single voice. However, complete segmentation is impossible to achieve because of economic factors, factors that affect effectiveness, and so on. To obtain a single voice from a segment, we sometimes need to divide it into many individual cases, in which case there would be many segments to deal with. On the other hand, to maximize market access performance, fewer segments are preferred. In this paper, we use the term "sub-segmentation" instead of "segmentation," because we divide a specific segment into more detailed segments. To sub-segment the elderly generation, this paper takes their lifestyles and life stages into consideration. To reflect these aspects, various surveys and several rounds of expert interviews and focus group interviews (FGIs) were performed. Using the results of these qualitative surveys, we define six sub-segments of the elderly generation. This paper uses five rules to divide the elderly generation: (1) mutually exclusive and collectively exhaustive (MECE) sub-segmentation, (2) important life stages, (3) notable lifestyles, (4) a minimum number of easily classifiable sub-segments, and (5) significant differences in voices among the sub-segments.
The most critical point for dividing the elderly market is whether the children are married. The other points are source of income, gender, and occupation. In this paper, the elderly market is divided into six sub-segments. As mentioned, the number of sub-segments is a key point for a successful marketing approach. Too many sub-segments would lead to insufficient substantiality or lack of actionability; too few would be ineffective. Therefore, finding the optimum number of sub-segments is a critical problem faced by marketers. This paper presents a method, deduced from the preceding surveys, for evaluating the fitness of sub-segments. The method uses the degree of homogeneity (DoH) to measure the adequacy of sub-segments, calculated from quantitative survey questions. The DoH is the ratio of significantly homogeneous questions to the total number of survey questions. A significantly homogeneous question is one in which one answer case is selected significantly more often than the others. To determine whether a case is selected significantly more often than the others, we use a hypothesis test. Here, the null hypothesis (H0) is that there is no significant difference between the selection of one case and that of the others; the total number of significantly homogeneous questions is thus the number of questions for which the null hypothesis is rejected. To calculate the DoH, we conducted a quantitative survey (total sample size of 400, 60 questions, 4-5 cases per question). The first sub-segment (no unmarried offspring, earns a living independently) has a sample size of 113. The second sub-segment (no unmarried offspring, economically supported by its offspring) has a sample size of 57. The third sub-segment (unmarried offspring, male, employed) has a sample size of 70.
The fourth sub-segment (unmarried offspring, male, not employed) has a sample size of 45. The fifth sub-segment (unmarried offspring, female, with either the woman herself or her husband employed) has a sample size of 63. The last sub-segment (unmarried offspring, female, with neither the woman nor her husband employed) has a sample size of 52. Statistically, the sample size of each sub-segment is sufficiently large, so we use the z-test for testing hypotheses. At a significance level of 0.05, the DoHs of the six sub-segments are 1.00, 0.95, 0.95, 0.87, 0.93, and 1.00, respectively; at a significance level of 0.01, they are 0.95, 0.87, 0.85, 0.80, 0.88, and 0.87. These results show that the first sub-segment is the most homogeneous category, while the fourth has more variety in its needs. If the sample size were sufficiently large, further segmentation within a sub-segment would be better; however, as the fourth sub-segment is smaller than the others, more detailed segmentation was not pursued. A critical point for a successful micro-marketing strategy is measuring the fit of a sub-segment, yet until now there have been no robust rules for measuring fit. This paper presents a method for evaluating the fit of sub-segments that will be helpful for judging the adequacy of sub-segmentation. However, it has some limitations that prevent it from being robust: (1) the method is restricted to quantitative questions; (2) deciding which types of questions to include in the calculation is difficult; (3) DoH values depend on how the questionnaire is composed. Despite these limitations, this paper has presented a useful method for conducting adequate sub-segmentation, and we believe the method can be applied widely in many areas.
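The DoH computation described above can be sketched in code. This is a hedged illustration: the paper applies a z-test at a given significance level, but the exact test statistic is not fully specified here, so this sketch tests whether the most frequent answer's share exceeds the uniform-choice proportion 1/k via a one-sided one-sample z-test for a proportion; all function names are illustrative:

```python
import math
from collections import Counter

# Sketch of the degree-of-homogeneity (DoH) idea: for each question, test
# whether the most frequent answer is chosen significantly more often than
# chance (H0: p = 1/k for k answer cases) with a one-sample z-test.
def question_is_homogeneous(answers, n_options, z_crit=1.645):
    n = len(answers)
    top_count = Counter(answers).most_common(1)[0][1]
    p_hat, p0 = top_count / n, 1 / n_options
    z = (p_hat - p0) / math.sqrt(p0 * (1 - p0) / n)
    return z > z_crit  # one-sided: selected more often than chance

# DoH = fraction of questions for which H0 is rejected.
def degree_of_homogeneity(question_answers, n_options):
    hits = sum(question_is_homogeneous(a, n_options) for a in question_answers)
    return hits / len(question_answers)
```

For example, a question where 90 of 100 respondents pick the same one of four cases is significantly homogeneous, while a uniform 25/25/25/25 split is not.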
Furthermore, the results of the sub-segmentation of the elderly generation can serve as a reference for marketing to the mature market.
The US supports the Information and Communication (IC) industry as a strategic industry in order to wield complete power over the world market. However, several other countries are also eager to support the IC industry because it produces high added value and has a significant effect on other industries; Korea is no exception. Korea recently succeeded in commercializing CDMA for the first time in the world, after the successful development of TDX. Hence, it is highly likely to draw scrutiny from the US. Although the IC industry is a specific sector of IT, there is a concern that trade friction might arise between the US and Korea due to possible competition. It will be very important to prepare a solution in advance so that Korea can prevent such friction and at the same time increase its share domestically and globally, and to solve the problem at minimum cost if a conflict unfortunately arises in the IT area. The parties that have a strong influence on US trade policy are think tanks and IT-related interest groups, so it would be important to maintain a close relationship with them. We found some implications by analyzing the case of Japan, which experienced trade friction with the US over a long period in the high-tech industry. To resolve those conflicts with the US, the Japanese did the following: (1) The Japanese government developed supporting theories and also sought international backing so that the world would support the Japanese position. (2) Through continual dialogue with US business people, Japanese business people sought solutions to share profits between Japan and the US in both domestic and worldwide markets, and focused on lobbying activities to shape US public opinion in Japan's favor.
The first element of the implementation plan was cultural lobbying toward US opinion leaders; the Japan Society was formed to carry out high-quality lobbying activities. The second element was economic lobbying: the Japanese Economic Institute was established in Washington to provide information about Japan, regularly and on an ad hoc basis, to the US government, research institutions, universities, and others interested in Japan. The main objective behind these activities was to advertise the validity of Japanese policy. Japanese top executives and practical interest groups on international trade tried to justify their position through direct contact with US policy makers. The third element was political lobbying. Japan is very careful about political lobbying: it does its best not to give the impression that it is trying to shape US policy making, collects a vast amount of information to judge situations correctly, and, rather than tilting toward one political party or the other, develops a long-term network of people who understand and support Japanese policy. The following implications were drawn from the experience of Japan. First, the Korean government should develop and execute a long-term plan to improve the image of Korea perceived by the American people. Second, the Korean government should begin public relations activities toward the US elite group; it is essential to advertise Korea to this group because it leads public opinion in the USA. Third, the Korean government needs to develop relevant policies to create a positive atmosphere for advertising toward the US. For example, we need information about to whom and how to direct lobbying activities, a personnel network that can respond immediately to inaccurate articles about Korea in the US press, and an up-to-date data bank of Korean support groups inside the USA.
Fourth, the Korean government should create an atmosphere that facilitates advertising toward the US, for example by providing tax incentives on expenses for such advertising and rewards to those who contribute significantly to these activities. Fifth, the Korean government should act as a bridge between Korean and US business people. Sixth, the government should promptly analyze the policy of the IT industry, a strategic area, and distribute information to Korean industries in a timely manner; since the Korean government is the only institution with formal contact with the US government, it is well placed to provide high-quality information. The following are some implications for businesses. First, Korean business organizations should carefully analyze and observe the business policies and managerial conditions of US companies; this is very important because all trade frictions arise at the business level. Second, it is also very important that the top management of Korean firms contact US opinion leaders. Third, Korean business people sent to the USA must do their part in PR activities. Fourth, it is very important to advertise to the American employees of Korean companies: if we cannot convince our American employees, it will be much harder to convince ordinary Americans, so it is important to make American employees a support group for the Korean position. Fifth, firms should try to obtain as much information as early as possible about US firms' policies in the IT area; early collection of information leaves more time to respond. Sixth, firms should research the PR cases of foreign, non-American companies inside the USA, identifying both the success factors and the failure factors.
Finally, a business firm will obtain more valuable information if it analyzes and responds to coverage according to each medium.
This research was conducted to examine the feasibility of developing fire-retardant particleboard and complyboard. Particleboards were manufactured using meranti particles (Shorea spp.) made with a Pallmann chipper, and complyboards using meranti particles and apitong veneer (Dipterocarpus spp.). Particles were passed through a 4 mm (6 mesh) screen and retained on a 1 mm (25 mesh) screen. Urea-formaldehyde resin was added at 10 percent of the oven-dry weight of the particles. Face veneers for the complyboard were 0.9, 1.6, and 2.3 mm in thickness and spread with 36 g/(30.48 cm)
As smartphones are equipped with various sensors such as the accelerometer, GPS, gravity sensor, gyroscope, ambient light sensor, proximity sensor, and so on, there have been many research works on making use of these sensors to create valuable applications. Human activity recognition is one such application, motivated by various welfare applications such as support for the elderly, measurement of calorie consumption, analysis of lifestyles, analysis of exercise patterns, and so on. One challenge in using smartphone sensors for activity recognition is that the number of sensors used should be minimized to save battery power. When the number of sensors used is restricted, it is difficult to realize a highly accurate activity recognizer or classifier, because it is hard to distinguish between subtly different activities relying on only limited information. The difficulty becomes especially severe when the number of different activity classes to be distinguished is large. In this paper, we show that a fairly accurate classifier can be built that distinguishes ten different activities using data from only a single sensor, the smartphone accelerometer. Our approach to this ten-class problem is the ensemble of nested dichotomies (END) method, which transforms a multi-class problem into multiple two-class problems. END builds a committee of binary classifiers in a nested fashion using a binary tree. At the root of the binary tree, the set of all classes is split into two subsets of classes by a binary classifier. At a child node of the tree, a subset of classes is again split into two smaller subsets by another binary classifier. Continuing in this way, we obtain a binary tree in which each leaf node contains a single class. This binary tree can be viewed as a nested dichotomy that can make multi-class predictions.
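The tree-building step described above can be illustrated with a minimal sketch of one random nested dichotomy. Only the class-set splitting is shown; in a real END, each internal node would also train a binary classifier (here, a random forest) to separate its two subsets. All names are illustrative, not from the paper:

```python
import random

# Sketch of building one random nested dichotomy: the class set is
# recursively split into two non-empty subsets until every leaf holds
# a single class. A real END would train a binary classifier (left vs.
# right) at each internal node and build many such trees.
def build_dichotomy(classes, rng):
    if len(classes) == 1:
        return classes[0]                    # leaf: a single class
    classes = list(classes)
    rng.shuffle(classes)
    cut = rng.randint(1, len(classes) - 1)   # random non-empty split
    left, right = classes[:cut], classes[cut:]
    return (build_dichotomy(left, rng), build_dichotomy(right, rng))

tree = build_dichotomy(["Sitting", "Standing", "Walking", "Running"],
                       random.Random(0))
```

An ensemble is formed by repeating this construction with different random seeds and averaging the trees' predictions at classification time.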
Depending on how the set of classes is split into two subsets at each node, the final tree can differ. Since some classes may be correlated, a particular tree may perform better than the others; however, we can hardly identify the best tree without deep domain knowledge. The END method copes with this problem by building multiple dichotomy trees randomly during learning and then combining the predictions made by each tree during classification. The END method is generally known to perform well even when the base learner is unable to model complex decision boundaries. As the base classifier at each node of the dichotomy, we have used another ensemble classifier, the random forest. A random forest is built by repeatedly generating decision trees, each time with a different random subset of features, using a bootstrap sample. By combining bagging with random feature-subset selection, a random forest enjoys the advantage of having more diverse ensemble members than simple bagging. As an overall result, our ensemble of nested dichotomies can be seen as a committee of committees of decision trees that can deal with a multi-class problem with high accuracy. The ten classes of activities distinguished in this paper are 'Sitting', 'Standing', 'Walking', 'Running', 'Walking Uphill', 'Walking Downhill', 'Running Uphill', 'Running Downhill', 'Falling', and 'Hobbling'. The features used for classifying these activities include not only the magnitude of the acceleration vector at each time point but also the maximum, minimum, and standard deviation of the vector magnitude within a time window of the last 2 seconds, etc. For experiments comparing the performance of END with other methods, accelerometer data were collected every 0.1 seconds for 2 minutes for each activity from 5 volunteers. Among these 5,900 (
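The windowed accelerometer features mentioned above (vector magnitude plus its maximum, minimum, and standard deviation over the last 2 seconds, i.e., 20 samples at 10 Hz) can be sketched as follows; the function names and the exact feature set are illustrative assumptions:

```python
import math

# Per-sample acceleration vector magnitude from (x, y, z) readings.
def magnitude(sample):
    x, y, z = sample
    return math.sqrt(x * x + y * y + z * z)

# Features over one 2-second window (20 samples at 10 Hz): the most
# recent magnitude plus max, min, and (population) standard deviation
# of the magnitudes within the window.
def window_features(samples):
    mags = [magnitude(s) for s in samples]
    n = len(mags)
    mean = sum(mags) / n
    std = math.sqrt(sum((m - mean) ** 2 for m in mags) / n)
    return {"last": mags[-1], "max": max(mags),
            "min": min(mags), "std": std}
```

Each window is turned into one feature vector, and consecutive windows form the training and test instances for the END classifier.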