• Title/Summary/Keyword: Space net


Impact of Urban Thermal Environment Improvement by Street Trees and Pavement Surface Albedo (가로수와 바닥 포장 표면 알베도의 도시 열 환경 개선 효과)

  • Na-youn Kim;Eun-sub Kim;Seok-hwan Yun;Zheng-gang Piao;Sang-hyuck Kim;Sang-jun Nam;Hwa-Jun Jea;Dong-kun Lee
    • Journal of the Korean Society of Environmental Restoration Technology
    • /
    • v.26 no.1
    • /
    • pp.47-59
    • /
    • 2023
  • Due to climate change and urbanization, abnormally high temperatures and heat waves are expected to increase in urban areas and deteriorate thermal comfort. Planting street trees and changing the albedo of urban surfaces are two strategies for mitigating the urban thermal environment, and both affect the radiative fluxes that reach, or are blocked from, pedestrians. After measuring shortwave and longwave radiation over ground surfaces of different albedo, with and without street trees, using a CNR4 net radiometer, this study analyzed the relationship between these two strategies in terms of thermal environment mitigation by calculating the mean radiant temperature (MRT) of each environment. Comparing the downward shortwave radiation measured under a street tree with that at the control point showed that the shortwave blocking effect of the tree increased as the downward shortwave radiation increased. During daytime hours (11 am to 3 pm), the MRT difference caused by the albedo difference (surface albedos of 0.479 and 0.131, respectively) on surfaces with no tree was approximately 3.58℃; when a tree was present, the MRT difference caused by the albedo difference was approximately 0.49℃. In addition, the light-colored ground surface with high albedo showed a lower surface temperature and a smaller range of temperature change than the surrounding low-albedo surface. These results show that the urban thermal environment can be mitigated by planting street trees, and that high-albedo ground surfaces can also be considered for pedestrian spaces. They can be utilized in planning urban streets and open spaces by choosing high-albedo surfaces along with the shading effect of vegetation, considering use by various users.
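
The abstract does not spell out the MRT formulation used; below is a minimal sketch of the common integral-radiation (six-directional) MRT calculation applied with net radiometers such as the CNR4. The function name, the angular weighting factors for a standing person, and the absorption/emissivity constants are standard textbook values, not figures from the study.

```python
import numpy as np

SIGMA = 5.67e-8           # Stefan-Boltzmann constant, W m^-2 K^-4
A_K, EPS_P = 0.7, 0.97    # shortwave absorption and emissivity of a person

def mean_radiant_temperature(K, L, weights=(0.22, 0.22, 0.22, 0.22, 0.06, 0.06)):
    """MRT in deg C from six-directional shortwave (K) and longwave (L)
    fluxes in W m^-2, ordered E, W, S, N, up, down; the angular weights
    are the usual values for a standing person."""
    K, L, w = map(np.asarray, (K, L, weights))
    s_str = A_K * np.sum(w * K) + EPS_P * np.sum(w * L)  # mean radiant flux density
    return (s_str / (EPS_P * SIGMA)) ** 0.25 - 273.15
```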

Development of a polystyrene phantom for quality assurance of a Gamma Knife®

  • Yona Choi;Kook Jin Chun;Jungbae Bahng;Sang Hyoun Choi;Gyu Seok Cho;Tae Hoon Kim;Hye Jeong Yang;Yeong Chan Seo;Hyun-Tai Chung
    • Nuclear Engineering and Technology
    • /
    • v.55 no.8
    • /
    • pp.2935-2940
    • /
    • 2023
  • A polystyrene phantom was developed following the guidance of the International Atomic Energy Agency (IAEA) for gamma knife (GK) quality assurance. Its performance was assessed by measuring the absorbed dose rate to water and dose distributions. The phantom was made of polystyrene, which has a relative electron density (1.0156) similar to that of water. The phantom comprised one outer phantom and four inner phantoms. Two inner phantoms held PTW T31010 and Exradin A16 ion chambers; one held a film in the XY plane of the Leksell coordinate system, and another held a film in the YZ or ZX plane. The absorbed dose rate to water and the beam profiles of the machine-specific reference (msr) field, namely the 16 mm collimator field of a GK Perfexion™ or Icon™, were measured at seven GK sites. The measured results were compared with those of an IAEA-recommended solid water (SW) phantom. The radius of the polystyrene phantom was determined to be 7.88 cm by converting via the electron density of the plastic, considering a water depth of 8 g/cm². The absorbed dose rates to water measured in both phantoms differed from the treatment planning program by less than 1.1%. Before msr correction, the PTW T31010 dose rates (PTW Freiburg GmbH, New York, NY, USA) in the polystyrene phantom were 0.70 (0.29)% higher on average than those in the SW phantom, and the Exradin A16 (Standard Imaging, Middleton, WI, USA) dose rates were 0.76 (0.32)% higher. After msr correction factors were applied, there were no statistically significant differences in the A16 dose rates measured in the two phantoms; however, the T31010 dose rates remained 0.72 (0.29)% higher in the polystyrene phantom. When the full widths at half maximum and penumbras of the msr field were compared, no significant differences between the two phantoms were observed except for the penumbra along the Y-axis, and that difference was smaller than the variation among sites. The polystyrene phantom developed for gamma knife dosimetry thus showed dosimetric performance comparable to that of a commercial SW phantom. In addition to its cost effectiveness, the polystyrene phantom removes the air space around the detector. Additional simulations of the msr correction factors of the polystyrene phantom should be performed.
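
The 7.88 cm radius quoted above follows directly from scaling the 8 g/cm² reference water depth by the relative electron density of polystyrene; a one-line check (variable names are illustrative):

```python
# Scaling the IAEA reference water depth by the relative electron density of
# polystyrene reproduces the phantom radius reported above.
water_depth_g_cm2 = 8.0        # reference depth in water
rel_electron_density = 1.0156  # polystyrene, relative to water
radius_cm = water_depth_g_cm2 / rel_electron_density
print(f"{radius_cm:.2f} cm")   # -> 7.88 cm
```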

Prediction of the remaining time and time interval of pebbles in pebble bed HTGRs aided by CNN via DEM datasets

  • Mengqi Wu;Xu Liu;Nan Gui;Xingtuan Yang;Jiyuan Tu;Shengyao Jiang;Qian Zhao
    • Nuclear Engineering and Technology
    • /
    • v.55 no.1
    • /
    • pp.339-352
    • /
    • 2023
  • Prediction of the time-related traits of pebble flow inside pebble-bed HTGRs is of great significance for reactor operation and design. In this work, an image-driven approach aided by a convolutional neural network (CNN) is proposed to predict the remaining time of initially loaded pebbles and the time interval between paired flow images of the pebble bed. Two strategies are put forward: one adds FC layers to classic classification CNN models and trains them as regressors; the other is CNN-based deep expectation (DEX), which regards time prediction as a deep classification task followed by softmax expected-value refinement. The dataset is obtained from discrete element method (DEM) simulations. Results show that the CNN-aided models generally make satisfactory predictions of the remaining time, with determination coefficients larger than 0.99. Among these models, VGG19+DEX performs best, and its CumScore (the proportion of the test set with prediction error within 0.5 s) reaches 0.939. Moreover, the remaining time of additional test sets and new cases can also be predicted well, indicating good generalization ability. In the task of predicting the time interval of image pairs, the VGG19+DEX model also generates satisfactory results. In particular, the trained model, with its promising generalization ability, has great potential for accurately and instantaneously predicting the traits of interest without additional computationally intensive DEM simulations. Nevertheless, data diversity and model optimization need to be improved to achieve the full potential of the CNN-aided prediction tool.
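
A minimal PyTorch sketch of the DEX idea described above: a VGG19 backbone classifies over discretized time bins and the prediction is the softmax-weighted expectation of the bin centers. The bin range and count are illustrative assumptions; only the backbone choice and the expectation step follow the abstract.

```python
import torch
import torch.nn as nn
from torchvision import models

class DEXTimePredictor(nn.Module):
    """DEX-style regression: classify over discretized time bins, then
    output the softmax-weighted expectation of the bin centers."""
    def __init__(self, t_min=0.0, t_max=100.0, n_bins=101):  # bin range: assumption
        super().__init__()
        backbone = models.vgg19(weights=None)                # VGG19 per the abstract
        backbone.classifier[-1] = nn.Linear(4096, n_bins)    # replace 1000-way head
        self.backbone = backbone
        self.register_buffer("bin_centers", torch.linspace(t_min, t_max, n_bins))

    def forward(self, x):                                    # x: (B, 3, 224, 224)
        probs = torch.softmax(self.backbone(x), dim=1)       # bin probabilities
        return (probs * self.bin_centers).sum(dim=1)         # expected time
```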

Quantifying forest resource change on the Korean Peninsula using satellite imagery and forest growth models (위성영상과 산림생장모형을 활용한 한반도 산림자원 변화 정량화)

  • Moonil Kim;Taejin Park
    • Korean Journal of Environmental Biology
    • /
    • v.42 no.2
    • /
    • pp.193-206
    • /
    • 2024
  • This study aimed to quantify changes in forest cover and carbon storage on the Korean Peninsula during the last two decades by integrating field measurements, satellite remote sensing, and modeling approaches. Our analysis based on 30-m Landsat data revealed that the forested area of the Korean Peninsula diminished significantly, by 478,334 ha, during 2000-2019, with South Korea and North Korea contributing 51.3% (245,725 ha) and 48.6% (232,610 ha) of the total change, respectively. This comparable pattern of forest loss was likely due to reduced deforestation and degradation in North Korea and active forest management activity in South Korea. Time series of aboveground biomass (AGB) on the Korean Peninsula showed that South and North Korean forests increased their total AGB by 146.4 Tg C (AGB in 2020 = 357.9 Tg C) and 140.3 Tg C (AGB in 2020 = 417.4 Tg C), respectively, during the last two decades. This translates into net AGB increases in South and North Korean forests from 34.8 and 29.4 Mg C ha⁻¹ to 58.9 (+24.1) and 44.2 (+14.8) Mg C ha⁻¹, respectively, indicating that South Korean forests were more productive during the study period and thus sequestered more carbon. Our approaches and results can provide useful information for quantifying national-scale forest cover and carbon dynamics, and can be utilized to support forest restoration planning in North Korea.

Computer Vision Approach for Phenotypic Characterization of Horticultural Crops (컴퓨터 비전을 활용한 토마토, 파프리카, 멜론 및 오이 작물의 표현형 특성화)

  • Seungri Yoon;Minju Shin;Jin Hyun Kim;Ho Jeong Jeong;Junyoung Park;Tae In Ahn
    • Journal of Bio-Environment Control
    • /
    • v.33 no.1
    • /
    • pp.63-70
    • /
    • 2024
  • This study explored computer vision methods using the OpenCV open-source library to characterize the phenotypes of various horticultural crops. For tomatoes, image color was examined to assess ripeness, and support vector machine (SVM) and histogram of oriented gradients (HOG) methods effectively identified ripe tomatoes. For sweet pepper, we visualized the color distribution and used a Gaussian mixture model for clustering to analyze post-harvest color characteristics. For the quality assessment of netted melons, the LAB (lightness, a, b) color space, binary images, and depth mapping were used to measure the melons' net patterns. In addition, a combination of depth and color data proved successful in identifying flowers of different sizes at different distances in cucumber greenhouses. This study highlights the effectiveness of these computer vision strategies for monitoring the growth and development, ripening, and quality of fruits and vegetables. For broader application in agriculture, future researchers and developers should combine these techniques with plant physiological indicators to promote their adoption in both research and practical agricultural settings.
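
As an illustration of the melon net-pattern step, a minimal OpenCV sketch: convert to LAB, threshold the lightness channel, and measure the fraction of net pixels. The file name and the use of Otsu's threshold are illustrative assumptions, not the authors' exact pipeline.

```python
import cv2
import numpy as np

# Illustrative input image; the raised net is brighter than the rind,
# so Otsu's method can separate it on the LAB lightness channel.
img = cv2.imread("melon.jpg")
lab = cv2.cvtColor(img, cv2.COLOR_BGR2LAB)
lightness = lab[:, :, 0]

_, net_mask = cv2.threshold(lightness, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
net_ratio = np.count_nonzero(net_mask) / net_mask.size   # fraction of net pixels
print(f"net coverage: {net_ratio:.1%}")
```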

High Density Tilapia Culture in a Recirculating Water System without Filter Bed (무여과순환수 탱크 이용 Tilapia의 고밀도 사육실험)

  • KIM In-Bae
    • Korean Journal of Fisheries and Aquatic Sciences
    • /
    • v.16 no.2
    • /
    • pp.59-67
    • /
    • 1983
  • An experiment on rearing tilapia stocked in closed recirculating tanks without biological filter beds was carried out at the Fish Culture Experiment Station of the National Fisheries University of Pusan from May 18 through October 21, 1982, examining growth rates, feed conversion, water quality, spawning prevention, and space utilization efficiency; finally, the feasibility of establishing commercial production units is discussed. Regarding water quality, the water temperature ranged from 22.8°C to 29.1°C, and total ammonia stayed around 10 ppm or slightly above. Maintaining a phytoplankton bloom was not successful, probably because of active consumption by the heavily stocked tilapia; several attempts to replace the culture water with green water from a nearby earthen pond ended with the bloom fading away within a couple of days. Feed conversions were relatively high, ranging from 0.9 to 1.2, except in experiment 1, when the fish had not fully recovered from a weakened wintering state. The feed used was partly a laboratory-prepared 25% protein diet and mostly a commercially available 39% protein carp feed. Spawning was completely controlled during the experiment as a result of the density effect; densities ranged from 10 kg to 40.7 kg per square meter at water depths of 0.5 to 0.6 m. Space utilization efficiency was very high: daily net production in experiment division 3, which showed the highest result, was 6.206 kg per tank, which works out to 3,235 metric tons per hectare per year. During this period, the water temperature ranged from 27.8°C to 29.1°C (average 28.4°C) and total ammonia was around 10 ppm. An estimate for a commercial set-up of the production system, based on the experiment divisions with initial stocking rates of 15 kg/m² or more, is made: if the total facility of 8 tanks comprising 56 m² of surface area were used as in the present study, the yield would be 5,639 kg from 200 days of rearing, achievable under a double-sheet vinyl house without additional heating, and this is thought economically feasible when 10 or more units are operated.
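
The per-hectare figure above follows from scaling the best tank's daily output; a quick check (the 7 m² tank area is an inference from 8 tanks totalling 56 m²):

```python
# Scaling the best tank's daily output to an annual per-hectare figure;
# the 7 m^2 tank area is inferred from 8 tanks totalling 56 m^2.
daily_kg_per_tank = 6.206
tank_area_m2 = 56 / 8                                # 7 m^2 per tank
kg_per_ha_per_year = daily_kg_per_tank / tank_area_m2 * 365 * 10_000
print(f"{kg_per_ha_per_year / 1000:,.0f} t/ha/yr")   # ~3,236, matching ~3,235 reported
```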

The Limitation of Air Carriers' Cargo and Baggage Liability in International Aviation Law: With Reference to the U.S. Courts' Decisions (국제항공법상 화물.수하물에 대한 운송인의 책임상한제도 - 미국의 판례 분석을 중심으로 -)

  • Moon, Joon-Jo
    • The Korean Journal of Air & Space Law and Policy
    • /
    • v.22 no.2
    • /
    • pp.109-133
    • /
    • 2007
  • The legal labyrinth through which we have just walked is one in which even a highly proficient lawyer could easily become lost. The Warsaw Convention's original objective of uniformity in private international aviation liability law has been eroded as the world community has attempted again to address perceived problems. Efforts to create simplicity and certainty of recovery actually may have created less of both. In any particular case, the issue of which international convention, intercarrier agreement, or national law to apply will likely be decided inconsistently with other decisions. The law has evolved faster for some nations, and slower for others. Under the Warsaw Convention of 1929, strict liability is imposed on the air carrier for damage, loss, or destruction of cargo, luggage, or goods sustained either: (1) during carriage by air, which comprises the period during which the cargo is in the charge of the carrier (a) within an aerodrome, (b) on board the aircraft, or (c) in any place where the aircraft lands outside an aerodrome; or (2) as a result of delay. By 2007, 151 nations had ratified the original Warsaw Convention, 136 had ratified the Hague Protocol, 84 had ratified the Guadalajara Protocol, and 53 had ratified Montreal Protocol No. 4, all of which have entered into force. In November 2003, the Montreal Convention of 1999 entered into force. Several airlines have embraced the Montreal Agreement or the IATA intercarrier agreements. Only seven nations have ratified the moribund Guatemala City Protocol. Meanwhile, the highly influential U.S. Second Circuit has rendered an opinion that no treaty on the subject is in force at all unless both affected nations have ratified the identical convention, leaving some cases to fall between the cracks into the arena of common law. Moreover, in the United States, a surface transportation movement prior or subsequent to the air movement may, depending upon the facts, be subject to Warsaw or to common law. At present, the international private air law regime can be described as a "situation of utter chaos" in which "even legal advisers and judges are confused." The net result of this barnacle-like layering of international and domestic rules, standards, agreements, and criteria is the elimination of legal simplicity and the substitution in its stead of complexity and commercial uncertainty, which manifestly cannot inure to the efficient and economical flow of world trade. All this makes a strong case for universal ratification of the Montreal Convention, which will supersede the Warsaw Convention and its various reformulations. Now that the Montreal Convention has entered into force, the insurance community may press the airlines to embrace it, which in turn may encourage the world's governments to ratify it. Under the Montreal Convention, the common-law defence is available to the carrier even when it was not the sole cause of the loss or damage, again making way for the application of the comparative fault principle. Hopefully, the recent entry into force of the Montreal Convention of 1999 will re-establish the international legal uniformity the Warsaw Convention of 1929 sought to achieve, though, for a transitional period at least, the courts of different nations will be applying different legal regimes.

Simulation and Feasibility Analysis of Aging Urban Park Refurbishment Project through the Application of Japan's Park-PFI System (일본 공모설치관리제도(Park-PFI)의 적용을 통한 노후 도시공원 정비사업 시뮬레이션 및 타당성 분석)

  • Kim, Yong-Gook;Kim, Young-Hyeon;Kim, Min-Seo
    • Journal of the Korean Institute of Landscape Architecture
    • /
    • v.51 no.5
    • /
    • pp.13-29
    • /
    • 2023
  • Urban parks are social infrastructure supporting citizens' health, quality of life, and community formation. As the proportion of urban parks established more than 20 years ago increases, so does the need for refurbishment to improve the physical environment and enhance the functions of aging urban parks. Since government-led refurbishment of aging urban parks has limitations in securing financial resources and enhancing attractiveness, such projects must be promoted through public-private partnerships. Japan, which suffered from the same problem of aging urban parks, has successfully promoted several park refurbishment projects by introducing Park-PFI through the 2017 revision of the 「Urban Park Act」. This study examines the characteristics of Japan's Park-PFI as an alternative for improving the quality of aging domestic urban park services through public-private partnerships and analyzes the feasibility of aging urban park refurbishment projects under Park-PFI. The main findings are as follows. First, it is necessary to start discussions on introducing Japan's Park-PFI, adapted to domestic conditions, as a means of public-private partnership to improve the service quality and diversify the functions of aging urban parks. To introduce Park-PFI, social discussions and follow-up studies on the deterioration of urban parks must be conducted, and the installation of private capital and profit facilities must be enabled through improvement of related regulations such as the 「Parks and Green Spaces Act」 and the 「Public Property Act」. Second, the Park-PFI project is judged to be a policy alternative that can enhance the benefits to citizens, local governments, and private operators, on the premise that the need to refurbish an aging urban park is high and its location is suitable for the project. In a pilot application of the Park-PFI project to Seyeong Park, an aging urban park located in Bupyeong-gu, Incheon, the project was analyzed to be profitable in terms of the profitability index (PI), financial net present value (FNPV), and financial internal rate of return (FIRR), so participation by the private sector appears feasible. At the local government level, private capital can be used to improve the physical environment of aging urban parks, and park management budgets can be secured from the financial resources generated when private operators return a portion of their facility usage fees and profits (0.5% of annual sales).
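
For reference, the three feasibility indicators named above can be computed as follows; a minimal sketch using the numpy-financial package, with an illustrative discount rate and cash-flow series that are not figures from the study.

```python
import numpy_financial as npf   # pip install numpy-financial

# Illustrative cash flows (year 0 outlay, then annual net inflows) and
# discount rate -- assumptions, not figures from the study.
rate = 0.045
cash_flows = [-1_000, 120, 150, 180, 200, 220, 240, 260, 280, 300]  # years 0-9

fnpv = npf.npv(rate, cash_flows)              # financial net present value
firr = npf.irr(cash_flows)                    # financial internal rate of return
pi = npf.npv(rate, [0] + cash_flows[1:]) / -cash_flows[0]   # profitability index

print(f"FNPV={fnpv:,.0f}  FIRR={firr:.1%}  PI={pi:.2f}")  # viable if FNPV>0, PI>1
```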

Memory Organization for a Fuzzy Controller.

  • Jee, K.D.S.;Poluzzi, R.;Russo, B.
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 1993.06a
    • /
    • pp.1041-1043
    • /
    • 1993
  • Fuzzy-logic-based control theory has gained much interest in the industrial world thanks to its ability to formalize and solve, in a very natural way, many problems that are very difficult to quantify at an analytical level. This paper shows a solution for treating membership functions inside hardware circuits. The proposed hardware structure optimizes the memory size by using a particular form of vectorial representation. The process of memorizing fuzzy sets, i.e. their membership functions, has always been one of the more problematic issues for hardware implementation, due to the quite large memory space that is needed. To simplify the implementation, it is common [1,2,8,9,10,11] to limit the membership functions either to those having a triangular or trapezoidal shape, or to a pre-defined shape. These kinds of functions can cover a large spectrum of applications with limited memory usage, since they can be memorized by specifying very few parameters (height, base, critical points, etc.). This, however, results in a loss of computational power due to the computation of intermediate points. A solution to this problem is obtained by discretizing the universe of discourse U, i.e. by fixing a finite number of points and memorizing the value of the membership functions at those points [3,10,14,15]. Such a solution provides satisfying computational speed and very high precision of definition, and gives users the opportunity to choose membership functions of any shape. However, significant memory waste can occur: it is indeed possible that, for each of the given fuzzy sets, many elements of the universe of discourse have a membership value equal to zero. It has also been noticed that in almost all cases the common points among fuzzy sets, i.e. points with non-null membership values, are very few. More specifically, in many applications there exist, for each element u of U, at most three fuzzy sets for which the membership value is not null [3,5,6,7,12,13]. Our proposal is based on these hypotheses. Moreover, we use a technique that, even though it does not restrict the shapes of membership functions, strongly reduces the computational time for the membership values and optimizes the function memorization. Figure 1 shows a term set whose characteristics are common for fuzzy controllers and to which we will refer in the following. This term set has a universe of discourse with 128 elements (so as to have good resolution), 8 fuzzy sets that describe the term set, and 32 discretization levels for the membership values. Clearly, the numbers of bits necessary for the given specifications are 5 for 32 truth levels, 3 for 8 membership functions, and 7 for 128 levels of resolution. The memory depth is given by the dimension of the universe of discourse (128 in our case) and is represented by the memory rows. The length of a memory word is defined by Length = nfm × (dm(m) + dm(fm)), where nfm is the maximum number of non-null membership values for any element of the universe of discourse, dm(m) is the bit width of a membership value m, and dm(fm) is the bit width of the index of the corresponding membership function. In our case, Length = 3 × (5 + 3) = 24, so the memory dimension is 128 × 24 bits. Had we chosen to memorize all values of the membership functions, each memory row would have to hold the membership value of every fuzzy set, i.e. 8 × 5 = 40 bits per row, giving a memory dimension of 128 × 40 bits. Coherently with our hypothesis, in Fig. 1 each element of the universe of discourse has a non-null membership value on at most three fuzzy sets; elements 32, 64, and 96 of the universe of discourse, for example, are memorized in this compact form. The computation of the rule weights is done by comparing the bits that represent the index of the membership function with the word of the program memory. The output bus of the program memory (μCOD) is given as input to a comparator (combinatory net): if the index equals the bus value, the corresponding non-null weight derived from the rule is produced as output; otherwise the output is zero (Fig. 2). Clearly, the memory dimension of the antecedent is reduced in this way, since only non-null values are memorized; moreover, the time performance of the system is equivalent to that of a system using vectorial memorization of all weights. The dimensioning of the word is influenced by some parameters of the input variable. The most important parameter is the maximum number of membership functions (nfm) having a non-null value on each element of the universe of discourse; from our study in the field of fuzzy systems, we see that typically nfm ≤ 3 and there are at most 16 membership functions. In any case, this value can be increased up to the physical dimensional limit of the antecedent memory. A less important role in the optimization of the word dimension is played by the number of membership functions defined for each linguistic term. The table below shows the required word dimension as a function of these parameters and compares our proposed method with the method of vectorial memorization [10]. Summing up, the characteristics of our method are: users are not restricted to membership functions with specific shapes; the number of fuzzy sets and the resolution of the vertical axis have very little influence on memory space; weight computations are done by a combinatorial network, so the time performance of the system is equivalent to that of the vectorial method; and the number of non-null membership values on any element of the universe of discourse is limited, a constraint that is usually not very restrictive since many controllers obtain good precision with only three non-null weights. The method briefly described here has been adopted by our group in the design of an optimized version of the coprocessor described in [10].
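
A minimal sketch of this sparse memorization scheme in Python, using the bit widths from the worked example above (5-bit membership values, 3-bit set indices, nfm = 3); the packing layout and helper name are illustrative, not the paper's hardware encoding.

```python
# Sparse storage of membership functions: per element of the universe of
# discourse, keep only the non-null values and the index of their fuzzy set.
# Bit widths follow the worked example in the text.
NFM, VALUE_BITS, INDEX_BITS = 3, 5, 3
WORD_LENGTH = NFM * (VALUE_BITS + INDEX_BITS)    # = 24 bits per memory row

def pack_row(entries):
    """Pack up to NFM (set_index, value) pairs into one memory word."""
    word = 0
    for set_index, value in entries[:NFM]:
        word = (word << (INDEX_BITS + VALUE_BITS)) | (set_index << VALUE_BITS) | value
    return word

# e.g. an element where fuzzy sets 2 and 3 have membership 12/31 and 7/31:
row = pack_row([(2, 12), (3, 7)])
```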

A Study of Sales Changes of Convenience Stores and Ratio Changes in the Composition of Business Types within Trading Areas of SSM (SSM 상권내의 업종 비율 변화와 편의점 매출액 변화에 대한 연구)

  • Cho, Chun-Han;Ahn, Seung-Ho
    • Journal of Distribution Research
    • /
    • v.16 no.5
    • /
    • pp.193-209
    • /
    • 2011
  • The fast expansion of super supermarkets (SSM) in Korean retail industries has attracted serious social attention, and some types of regulation to slow down their growth have been prepared. However, the regulations are hard to justify because they attempt to establish entry barriers, which are not a recommendable economic policy; accordingly, the regulations should be justified at least on the basis of social and political causes. This study interprets those social and political causes as the effects of SSM entry on the trading areas where SSMs are located, which distinguishes it from past studies that focused only on intertype and intratype competition between retailers. Another goal of the study is to complement the weaknesses of past studies and provide additional information to settle the issues. More specifically, the study investigates the relationships between changes in the sales of convenience stores, which may serve as a surrogate measure of the viability of a local economy, and changes in the composition of business types within a 500 m radius of an SSM. Further, the study investigates the effects of the establishment of an SSM and of the retail sales index on the sales of convenience stores. The study analyzes panel data using Swamy's random coefficient model. The results show that the effects of the establishment of an SSM on the sales of convenience stores are not statistically significant. The relationship between the change in the portion of restaurants among local businesses and the change in the sales of convenience stores is positive; on the other hand, the relationship between the change in the portion of retailers and the change in the sales of convenience stores is negative. In conclusion, even if negative effects of SSM establishment on local economies are expected, as long as other types of business, especially restaurants, fill the space left by retailers, the net effect on the local economy may not be significant, or may even be positive.
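
The abstract names Swamy's random coefficient model without estimation details; below is a minimal textbook sketch of the Swamy (1970) estimator (unit-level OLS followed by GLS pooling). The data layout and function name are illustrative, not the authors' specification.

```python
import numpy as np

def swamy_rcm(groups):
    """Swamy (1970) random-coefficient estimator.

    `groups`: list of (y_i, X_i) pairs, one per cross-sectional unit
    (here, one per convenience store). Returns the GLS estimate of the
    mean coefficient vector."""
    b, V = [], []
    for y, X in groups:
        n, k = X.shape
        bi, *_ = np.linalg.lstsq(X, y, rcond=None)     # unit-level OLS
        resid = y - X @ bi
        sigma2 = resid @ resid / (n - k)               # unit error variance
        b.append(bi)
        V.append(sigma2 * np.linalg.inv(X.T @ X))      # sampling var of b_i
    b = np.array(b)
    # Dispersion of coefficients across units minus average sampling noise
    # (in practice Delta is adjusted if it is not positive semi-definite).
    Delta = np.cov(b, rowvar=False) - sum(V) / len(V)
    W = [np.linalg.inv(Delta + Vi) for Vi in V]        # GLS weights
    return np.linalg.solve(sum(W), sum(Wi @ bi for Wi, bi in zip(W, b)))
```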
