• Title/Summary/Keyword: Second-hand


The Requirement and Effect of the Document of Carriage in Respect of the International Carriage of Cargo by Air (국제항공화물운송에 관한 운송증서의 요건 및 효력)

  • Lee, Kang-Bin
    • The Korean Journal of Air & Space Law and Policy
    • /
    • v.23 no.2
    • /
    • pp.67-92
    • /
    • 2008
  • The purpose of this paper is to examine the requirements and effect of the document of carriage in respect of the carriage of cargo by air under the Montreal Convention of 1999, the IATA Conditions of Carriage for Cargo, and the judicial precedents of Korea and foreign countries. Under Article 4 of the Montreal Convention, in respect of the carriage of cargo, an air waybill shall be delivered. If any other means which preserves a record of the carriage is used, the carrier shall, if so requested by the consignor, deliver to the consignor a cargo receipt. Under Article 7 of the Montreal Convention, the air waybill shall be made out by the consignor. If, at the request of the consignor, the carrier makes it out, the carrier shall be deemed to have done so on behalf of the consignor. The air waybill shall be made out in three original parts. The first part shall be marked "for the carrier" and shall be signed by the consignor. The second part shall be marked "for the consignee" and shall be signed by the consignor and by the carrier. The third part shall be signed by the carrier, who shall hand it to the consignor after the goods have been accepted. Under Article 5 of the Montreal Convention, the air waybill or the cargo receipt shall include (a) an indication of the places of departure and destination, (b) an indication of at least one agreed stopping place, and (c) an indication of the weight of the consignment. Under Article 10 of the Montreal Convention, the consignor shall indemnify the carrier against all damage suffered by the carrier, or by any other person to whom the carrier is liable, by reason of the irregularity, incorrectness or incompleteness of the particulars and statements furnished by the consignor or on its behalf. 
Under Article 9 of the Montreal Convention, non-compliance with Articles 4 to 8 shall not affect the existence or the validity of the contract, which shall nonetheless be subject to the rules of the Montreal Convention, including those relating to limitation of liability. The air waybill is not a document of title or a negotiable instrument. Under Article 11 of the Montreal Convention, the air waybill or the cargo receipt is prima facie evidence of the conclusion of the contract, of the acceptance of the cargo, and of the conditions of carriage. Under Article 12 of the Montreal Convention, if the carrier carries out the instructions of the consignor for the disposition of the cargo without requiring the production of the part of the air waybill or the cargo receipt, the carrier will be liable for any damage which may be caused thereby to any person who is lawfully in possession of that part of the air waybill or the cargo receipt. According to the Korean Supreme Court decision rendered on 22 July 2004, a freight forwarder acting as carrier was not liable for the wrongful delivery of cargo by the operator of a bonded warehouse to the notify party (the actual importer) named on the air waybill, because the freight forwarder had neither designated the bonded warehouse nor stood in the position of employer to its operator. In conclusion, as the Korean customs authorities will drive the e-Freight project for the carriage of cargo by air, carriers and freight forwarders should pay attention to the requirements and legal effect of the electronic documentation of the carriage of cargo by air.

  • PDF

A Study on the Differences of Information Diffusion Based on the Type of Media and Information (매체와 정보유형에 따른 정보확산 차이에 대한 연구)

  • Lee, Sang-Gun;Kim, Jin-Hwa;Baek, Heon;Lee, Eui-Bang
    • Journal of Intelligence and Information Systems
    • /
    • v.19 no.4
    • /
    • pp.133-146
    • /
    • 2013
  • While the use of the internet is routine nowadays, users receive and share information through a variety of media. Through the internet, information delivery media are diversifying from traditional one-way media, such as newspapers, TV, and radio, into two-way media. In contrast to traditional media, blogs enable individuals to directly upload and share news, and can therefore be expected to show a different speed of information diffusion than news media that convey information unilaterally. This study therefore focused on the difference between online news and social media blogs. Moreover, the speed of information diffusion varies because information closely related to a person boosts communication between individuals. We believe that users' standards of evaluation change depending on the type of information, and that the speed of information diffusion changes with the level of proximity. The purpose of this study is thus to examine differences in information diffusion based on the type of media, and then, segmenting the information, to examine how diffusion differs based on the type of information. This study used the Bass diffusion model, which has been frequently used because it has higher explanatory power than other models, explaining the diffusion of a market through an innovation effect and an imitation effect; it has also been widely applied in other studies of information diffusion. The innovation effect measures the early-stage impact, while the imitation effect measures the impact of word of mouth at a later stage. According to Mahajan et al. (2000), the innovation effect is driven by usefulness and ease of use, while the imitation effect is driven by subjective norms and word of mouth. According to Lee et al. (2011), the innovation effect is driven by mass communication, and according to Moore and Benbasat (1996), by relative advantage; the imitation effect operates through within-group influences, whereas the innovation effect reflects the innovativeness of the product or service. Our study therefore compared online news and social media blogs to examine the differences between media. We also chose different types of information, including entertainment-related information ("Psy Gentleman"), current affairs news ("Earthquake in Sichuan, China"), and product-related information ("Galaxy S4"), in order to examine variations in information diffusion. We considered that users' information proximity alters with the type of information, and hence chose these three types, which differ in their level of proximity from the users' standpoint, to examine the flow of information diffusion. The first conclusion of this study is that different media have a similar effect on information diffusion, even when the types of media of the information providers differ; diffusion was distinguished only by disparities in information proximity. Second, information diffusion differs based on the type of information. From the standpoint of users, product- and entertainment-related information shows a high imitation effect because of word of mouth; on the other hand, the innovation effect dominates the imitation effect for current affairs news. From these results, changes in the flow of information diffusion can be examined and applied in practice. This study has some limitations, which provide opportunities and suggestions for future research: the small sample size makes it difficult to generalize the observed differences in information diffusion by media and proximity. Therefore, if future studies increase the sample size and media diversity, the differences in information diffusion by media type and information proximity could be understood in more detail.
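The Bass model used in the abstract above can be sketched in a few lines of code. This is a generic illustration with made-up parameters, not the paper's estimates: m is the market potential, p the innovation (external-influence) coefficient, and q the imitation (word-of-mouth) coefficient.

```python
def bass_diffusion(m, p, q, periods):
    """Discrete-time Bass model: new adopters each period come from an
    innovation effect (p, external influence such as mass media) and an
    imitation effect (q, word of mouth from prior adopters)."""
    cumulative = 0.0
    new_by_period = []
    for _ in range(periods):
        # hazard of adoption rises with the fraction already adopted
        new = (p + q * cumulative / m) * (m - cumulative)
        cumulative += new
        new_by_period.append(new)
    return new_by_period

# Illustrative parameters (not from the paper): a word-of-mouth-driven item
# (high q, e.g. entertainment content) vs. a broadcast-driven item (high p).
wom = bass_diffusion(m=10_000, p=0.01, q=0.40, periods=30)
broadcast = bass_diffusion(m=10_000, p=0.20, q=0.05, periods=30)
```

With a dominant imitation effect the adoption curve peaks late (an S-shaped diffusion), while a dominant innovation effect peaks in the very first period, which is the qualitative contrast the study draws between information types.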

Ensemble of Nested Dichotomies for Activity Recognition Using Accelerometer Data on Smartphone (Ensemble of Nested Dichotomies 기법을 이용한 스마트폰 가속도 센서 데이터 기반의 동작 인지)

  • Ha, Eu Tteum;Kim, Jeongmin;Ryu, Kwang Ryel
    • Journal of Intelligence and Information Systems
    • /
    • v.19 no.4
    • /
    • pp.123-132
    • /
    • 2013
  • As smartphones are equipped with various sensors such as the accelerometer, GPS, gravity sensor, gyroscope, ambient light sensor, proximity sensor, and so on, there have been many research works on making use of these sensors to create valuable applications. Human activity recognition is one such application, motivated by various welfare applications such as support for the elderly, measurement of calorie consumption, analysis of lifestyles, analysis of exercise patterns, and so on. One of the challenges faced when using smartphone sensors for activity recognition is that the number of sensors used should be minimized to save battery power. When the number of sensors used is restricted, it is difficult to realize a highly accurate activity recognizer, or classifier, because it is hard to distinguish between subtly different activities relying on only limited information. The difficulty gets especially severe when the number of different activity classes to be distinguished is very large. In this paper, we show that a fairly accurate classifier can be built that can distinguish ten different activities by using data from only a single sensor, the smartphone accelerometer. The approach that we take to dealing with this ten-class problem is to use the ensemble of nested dichotomies (END) method, which transforms a multi-class problem into multiple two-class problems. END builds a committee of binary classifiers in a nested fashion using a binary tree. At the root of the binary tree, the set of all classes is split into two subsets of classes by using a binary classifier. At a child node of the tree, a subset of classes is again split into two smaller subsets by using another binary classifier. Continuing in this way, we obtain a binary tree where each leaf node contains a single class. This binary tree can be viewed as a nested dichotomy that can make multi-class predictions. 
Depending on how a set of classes is split into two subsets at each node, the final tree that we obtain can be different. Since there can be some classes that are correlated, a particular tree may perform better than the others. However, we can hardly identify the best tree without deep domain knowledge. The END method copes with this problem by building multiple dichotomy trees randomly during learning, and then combining the predictions made by each tree during classification. The END method is generally known to perform well even when the base learner is unable to model complex decision boundaries. As the base classifier at each node of the dichotomy, we have used another ensemble classifier called the random forest. A random forest is built by repeatedly generating a decision tree, each time with a different random subset of features, using a bootstrap sample. By combining bagging with random feature subset selection, a random forest enjoys the advantage of having more diverse ensemble members than simple bagging. As an overall result, our ensemble of nested dichotomies can be seen as a committee of committees of decision trees that can deal with a multi-class problem with high accuracy. The ten classes of activities that we distinguish in this paper are 'Sitting', 'Standing', 'Walking', 'Running', 'Walking Uphill', 'Walking Downhill', 'Running Uphill', 'Running Downhill', 'Falling', and 'Hobbling'. The features used for classifying these activities include not only the magnitude of the acceleration vector at each time point but also the maximum, the minimum, and the standard deviation of the vector magnitude within a time window of the last 2 seconds, etc. For experiments comparing the performance of END with those of other methods, accelerometer data has been collected every 0.1 seconds for 2 minutes for each activity from 5 volunteers. 
Among these 5,900 (= 5 × (60 × 2 − 2) / 0.1) data points collected for each activity (the data for the first 2 seconds are discarded because they do not have time window data), 4,700 have been used for training and the rest for testing. Although 'Walking Uphill' is often confused with some other similar activities, END has been found to classify all of the ten activities with a fairly high accuracy of 98.4%. On the other hand, the accuracies achieved by a decision tree, a k-nearest neighbor classifier, and a one-versus-rest support vector machine have been observed as 97.6%, 96.5%, and 97.6%, respectively.
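As a rough sketch of how an ensemble of nested dichotomies works (not the paper's implementation: the random-forest base classifier is replaced here by a simple nearest-centroid rule, and the accelerometer features by toy 2-D clusters):

```python
import random
from collections import Counter

import numpy as np

class DichotomyNode:
    """One node of a random nested dichotomy: the remaining class set is split
    into two subsets, and a binary classifier (here a nearest-centroid rule,
    standing in for the paper's random forest) routes samples to a subset."""
    def __init__(self, classes, rng):
        self.classes = list(classes)
        if len(self.classes) > 1:
            rng.shuffle(self.classes)
            k = rng.randint(1, len(self.classes) - 1)
            self.left = DichotomyNode(self.classes[:k], rng)
            self.right = DichotomyNode(self.classes[k:], rng)

    def fit(self, X, y):
        if len(self.classes) > 1:
            lmask = np.isin(y, self.left.classes)
            rmask = np.isin(y, self.right.classes)
            self.lc = X[lmask].mean(axis=0)  # centroid of left class subset
            self.rc = X[rmask].mean(axis=0)  # centroid of right class subset
            self.left.fit(X[lmask], y[lmask])
            self.right.fit(X[rmask], y[rmask])
        return self

    def predict_one(self, x):
        if len(self.classes) == 1:
            return self.classes[0]
        nearer = (self.left if np.linalg.norm(x - self.lc) <= np.linalg.norm(x - self.rc)
                  else self.right)
        return nearer.predict_one(x)

def end_predict(trees, x):
    """The END committee predicts by majority vote over its dichotomy trees."""
    return Counter(t.predict_one(x) for t in trees).most_common(1)[0][0]

# Toy stand-in for the activity data: three well-separated classes in 2-D.
data_rng = np.random.default_rng(0)
centers = {"sit": (0.0, 0.0), "walk": (5.0, 0.0), "run": (0.0, 5.0)}
X = np.vstack([data_rng.normal(c, 0.3, size=(30, 2)) for c in centers.values()])
y = np.array([label for label in centers for _ in range(30)])

tree_rng = random.Random(42)
trees = [DichotomyNode(sorted(centers), tree_rng).fit(X, y) for _ in range(5)]
accuracy = np.mean([end_predict(trees, x) == t for x, t in zip(X, y)])
```

Each tree is one random way of carving the class set down to singletons; averaging the votes of several such trees is what makes END robust to an unlucky split, exactly as the abstract describes.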

A study on Development Process of Fish Aquaculture in Japan - Case by Seabream Aquaculture - (일본 어류 양식업의 발전과정과 산지교체에 관한 연구 : 참돔양식업을 사례로)

  • 송정헌
    • The Journal of Fisheries Business Administration
    • /
    • v.34 no.2
    • /
    • pp.75-90
    • /
    • 2003
  • When we think of the fundamental problems of the aquaculture industry, there are several strict conditions, and consequently the aquaculture industry is forced to change. Fish aquaculture faces a structural supply surplus in production, deterioration of fishing grounds, stagnant low prices due to the recent recession, and drastic changes in distribution circumstances. We need to initiate discussion on such issues as how fish aquaculture will establish its status within the coastal fishery, whether fish aquaculture will grow in the future, and if so, how it will be restructured. The above issues can be observed in the mariculture of yellowtail, sea scallop, and eel. But sea bream has not been studied, even though its production has recently exceeded 30% of the total production of fish aquaculture and it occupies an important status in the industry. The objective of this study is to forecast the future movement of sea bream aquaculture. The first goal of the study is to contribute to managerial and economic studies on the aquaculture industry. The second goal is to identify the factors influencing the competition between production areas and to identify the mechanisms involved. This study will examine the competitive power of individual producing areas, their behavior, and the factors driving them, based on case study. Producing areas will be categorized according to the following parameters: distance to market and availability of transportation, natural environment, the time of formation of producing areas (leader/follower), major production items, scale of business and producing areas, and degree of organization in production and sales. As a factor in shaping the production areas of sea bream aquaculture, natural conditions, especially the water temperature, are very important. Sea bream shows more active feeding and faster growth in areas where the water temperature does not go below 13–14°C during the winter. 
Fish aquaculture is also constrained by transport distance. Aquacultured yellowtail is a mass-produced and mass-distributed item; it is sold by the cage and transported by ship. On the other hand, sea bream is sold in small amounts in markets and transported by truck, so the transportation cost is higher than for yellowtail. Aquacultured sea bream has different product characteristics depending on transport distance, so we need to study the live fish and fresh fish markets separately. Live fish was the original product form of aquacultured sea bream. Transportation of live fish has more constraints than transportation of fresh fish: death rate and distance are highly correlated, and the loading capacity for live fish is less than for fresh fish. In the case of a 10-ton truck, live fish can only be loaded up to 1.5 tons, but fresh fish, which can be packed in boxes, can be loaded up to 5 to 6 tons. Because of these characteristics, live fish requires a location closer to the consumption area than fresh fish. In the consumption markets, the size of fresh fish is mainly 0.8 to 2 kg. Live fish usually goes through auction, and quality is graded. The main purchasers are many small restaurants, so relatively small farmers and distributors can sell it. Aquacultured sea bream has been transacted as fresh fish in GMS (General Merchandise Stores) since 1993, when the price plummeted. Economies of scale work in the case of fresh fish. The characteristics of fresh fish are as follows: as large-scale demanders, General Merchandise Stores are the main purchasers of sea bream, and the size of the fish is around 1.3 kg. It mainly goes through negotiation. Aquacultured sea bream has been established as a representative food in General Merchandise Stores. GMS require stable and mass supply, consistent size, and low prices, and the distribution of fresh fish is undertaken by large-scale distributors, which can satisfy the requirements of GMS. The market share in the Tokyo Central Wholesale Market shows that Mie Pref. is dominant in live fish and Ehime Pref. is dominant in fresh fish. Ehime Pref. showed remarkable growth in the 1990s. At present, dealings in live fish are decreasing, while dealings in fresh fish are increasing in the Tokyo Central Wholesale Market, and the price of live fish is falling more than that of fresh fish. Even though Ehime Pref. has an ideal natural environment for sea bream aquaculture, its entry into sea bream aquaculture was late, because it was located at a further distance from consumers than the competing producing areas. However, Ehime Pref. became the number one producing area through the sales of fresh fish in the 1990s. Its production volume is almost 3 times that of Mie Pref., the number two production area. More conversion from yellowtail aquaculture to sea bream aquaculture is taking place in Ehime Pref., because Kagoshima Pref. has a better natural environment for yellowtail aquaculture. Transportation is worse than in Mie Pref., but this far-flung producing area makes up for it by increasing its business scale. Ehime Pref. increases its market share for fresh fish by creating demand from GMS, and has developed market strategies such as quick returns at small profits, stable mass supply, and standardization in size. Ehime Pref. increases its market power through the capital of large-scale commission agents. Secondly, Mie Pref. is close to markets and composed of small-scale farmers. Mie Pref. switched to sea bream aquaculture early because of the price decrease in aquacultured yellowtail and natural environmental problems. Mie Pref. had not changed until 1993, when the price of sea bream plummeted, because it had a better natural environment and transportation. Mie Pref. has a suitable water temperature range for sea bream aquaculture. However, the price of live sea bream continued to decline due to excessive production and the economic recession. As a consequence, small-scale farmers were faced with a market price below the average production cost in 1993. In such a situation, small and inefficient operators in Mie Pref. were obliged to withdraw from sea bream aquaculture. Kumamoto Pref. is located further from market sites and has unsuitable natural environmental conditions for sea bream aquaculture. Although Kumamoto Pref. is trying to convert to puffer fish aquaculture, which requires different rearing techniques, the aquaculture techniques for puffer fish are not yet established.

  • PDF

Pareto Ratio and Inequality Level of Knowledge Sharing in Virtual Knowledge Collaboration: Analysis of Behaviors on Wikipedia (지식 공유의 파레토 비율 및 불평등 정도와 가상 지식 협업: 위키피디아 행위 데이터 분석)

  • Park, Hyun-Jung;Shin, Kyung-Shik
    • Journal of Intelligence and Information Systems
    • /
    • v.20 no.3
    • /
    • pp.19-43
    • /
    • 2014
  • The Pareto principle, also known as the 80-20 rule, states that roughly 80% of the effects come from 20% of the causes for many events, including natural phenomena. It has been recognized as a golden rule in business, with wide application of such discoveries as 20 percent of customers accounting for 80 percent of total sales. On the other hand, the Long Tail theory, pointing out that "the trivial many" produce more value than "the vital few," has gained popularity in recent times with a tremendous reduction of distribution and inventory costs through the development of ICT (Information and Communication Technology). This study started with a view to illuminating how these two primary business paradigms, the Pareto principle and the Long Tail theory, relate to the success of virtual knowledge collaboration. The importance of virtual knowledge collaboration is soaring in this era of globalization and virtualization transcending geographical and temporal constraints. Many previous studies on knowledge sharing have focused on the factors that affect knowledge sharing, seeking to boost individual knowledge sharing and resolve the social dilemma caused by the fact that rational individuals are likely to consume rather than contribute knowledge. Knowledge collaboration can be defined as the creation of knowledge by not only sharing knowledge, but also transforming and integrating such knowledge. In this perspective of knowledge collaboration, the relative distribution of knowledge sharing among participants can count as much as the absolute amount of individual knowledge sharing. In particular, whether a greater contribution by the upper 20 percent of participants in knowledge sharing will enhance the efficiency of overall knowledge collaboration is an issue of interest. This study deals with the effect of this distribution of knowledge sharing on the efficiency of knowledge collaboration and is extended to reflect the work characteristics. 
All analyses were conducted based on actual data instead of self-reported questionnaire surveys. More specifically, we analyzed the collaborative behaviors of the editors of 2,978 English Wikipedia featured articles, the best quality grade of articles in English Wikipedia. We adopted the Pareto ratio, the ratio of the number of knowledge contributions made by the upper 20 percent of participants to the total number of knowledge contributions made by all participants of an article group, to examine the effect of the Pareto principle. In addition, the Gini coefficient, which represents the inequality of income among a group of people, was applied to reveal the effect of inequality of knowledge contribution. Hypotheses were set up based on the assumption that a higher ratio of knowledge contribution by more highly motivated participants will lead to higher collaboration efficiency, but that if the ratio gets too high, collaboration efficiency will deteriorate because overall informational diversity is threatened and the knowledge contribution of less motivated participants is discouraged. Cox regression models were formulated for each of the focal variables, Pareto ratio and Gini coefficient, with seven control variables such as the number of editors involved in an article, the average time length between successive edits of an article, the number of sections a featured article has, etc. The dependent variable of the Cox models is the time from article initiation to promotion to the featured article level, indicating the efficiency of knowledge collaboration. To examine whether the effects of the focal variables vary depending on the characteristics of a group task, we classified the 2,978 featured articles into two categories: academic and non-academic. Academic articles are those that cite at least one paper published in an SCI, SSCI, A&HCI, or SCIE journal.
We assumed that academic articles are more complex, entail more information processing and problem solving, and thus require more skill variety and expertise. The analysis results indicate the following. First, the Pareto ratio and the inequality of knowledge sharing relate in a curvilinear fashion to collaboration efficiency in an online community, promoting it up to an optimal point and undermining it thereafter. Second, this curvilinear effect of the Pareto ratio and the inequality of knowledge sharing on collaboration efficiency is more pronounced for more academic tasks in an online community.
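The two focal measures of the study above are easy to compute from per-editor contribution counts. A minimal sketch (the edit counts below are hypothetical, not Wikipedia data):

```python
import numpy as np

def pareto_ratio(contributions):
    """Share of all contributions made by the top 20% of contributors."""
    counts = np.sort(np.asarray(contributions, dtype=float))[::-1]
    top_n = max(1, int(np.ceil(0.2 * counts.size)))
    return counts[:top_n].sum() / counts.sum()

def gini(contributions):
    """Gini coefficient of a contribution distribution: 0 means perfectly
    equal contributions; values near 1 mean contributions are concentrated
    in very few hands."""
    x = np.sort(np.asarray(contributions, dtype=float))
    n = x.size
    i = np.arange(1, n + 1)  # ranks of the sorted values
    return 2.0 * np.sum(i * x) / (n * x.sum()) - (n + 1.0) / n

edits = [120, 40, 10, 8, 5, 5, 4, 3, 3, 2]  # hypothetical per-editor edit counts
```

In the study these per-article values then enter the Cox models as the focal covariates; here they simply quantify how top-heavy a contribution distribution is.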

Derivation of Digital Music's Ranking Change Through Time Series Clustering (시계열 군집분석을 통한 디지털 음원의 순위 변화 패턴 분류)

  • Yoo, In-Jin;Park, Do-Hyung
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.3
    • /
    • pp.171-191
    • /
    • 2020
  • This study focused on digital music, which is the most valuable cultural asset in modern society and occupies a particularly important position in the flow of the Korean Wave. Digital music data were collected based on the "Gaon Chart," a well-established music chart in Korea. Through this, changes in the ranking of the songs that entered the chart over 73 weeks were collected. Afterwards, patterns with similar characteristics were derived through time series cluster analysis, and a descriptive analysis was performed on the notable features of each pattern. The research process suggested by this study is as follows. First, in the data collection process, time series data were collected to check the ranking changes of digital music. Subsequently, in the data processing stage, the collected data were matched with the rankings over time, and the music titles and artist names were processed. The analysis then proceeds sequentially in two stages: exploratory analysis and explanatory analysis. The data collection period was limited to the period before 'the music bulk buying phenomenon', a reliability issue related to music rankings in Korea. Specifically, it covers 73 weeks, from the first week (December 31, 2017 to January 6, 2018) through the week of May 19 to May 25, 2019, and the analysis targets were limited to digital music released in Korea. Unlike the private music charts serviced in Korea, the Gaon Chart is approved by government agencies and has basic reliability; it can therefore be considered to carry more public confidence than the ranking information provided by other services. The contents of the collected data are as follows. 
Data on the period and ranking, the name of the song, the name of the artist, the name of the album, the Gaon index, the production company, and the distribution company were collected for the songs that entered the top 100 on the music chart within the collection period. Through data collection, 7,300 chart entries in the top 100 were identified over the 73 weeks. Since songs frequently stay on the chart for more than two weeks, duplicates were removed through a pre-processing step: the number and location of duplicated songs were checked using a duplicate check function, and the duplicates were then deleted to form the data for analysis. Through this, a list of 742 unique songs for analysis was secured from the 7,300 chart entries. In addition, a total of 16 patterns were derived through time series cluster analysis of the ranking changes. Based on these patterns, two representative patterns were identified: 'Steady Seller' and 'One-Hit Wonder'. Furthermore, the two patterns were subdivided into five patterns in consideration of the survival period of the songs and their rankings. The important characteristics of each pattern are as follows. First, the artist's superstar effect and the bandwagon effect were strong in the one-hit wonder pattern; when consumers choose digital music, they are strongly influenced by these effects. Second, through the Steady Seller pattern, we confirmed the songs that have been chosen by consumers for a very long time, and identified the patterns of the most frequently chosen songs in terms of consumer needs. Contrary to popular belief, the steady seller: mid-term pattern, not the one-hit wonder pattern, received the most choices from consumers. 
Particularly noteworthy is that the 'Climbing the Chart' phenomenon, which runs contrary to the existing patterns, was confirmed through the steady-seller pattern. This study focuses on changes in the ranking of music over time, an area that has received relatively little attention in digital music research. In addition, a new approach to music research was attempted by subdividing the patterns of ranking change rather than predicting the success and ranking of music.
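The clustering step described above can be sketched with plain k-means over fixed-length rank trajectories. The trajectories below are synthetic stand-ins for the Gaon chart data, shaped like the two representative patterns the study reports:

```python
import numpy as np

def kmeans(X, k, iters=50):
    """Plain k-means on fixed-length weekly rank trajectories (rows of X)."""
    # Deterministic init: spread the initial centers across the data rows.
    idx = np.linspace(0, len(X) - 1, k).astype(int)
    centers = X[idx].astype(float)
    for _ in range(iters):
        # Assign each trajectory to its nearest center (Euclidean distance).
        dists = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
        labels = dists.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers

weeks = np.arange(20)
# Synthetic rank trajectories (rank 1 = best): "steady sellers" hover near
# the top of the chart, "one-hit wonders" debut high and fall off quickly.
steady = np.array([5 + 2 * np.sin(weeks / 3.0) + i for i in range(10)])
one_hit = np.array([3 + 4.5 * weeks + i for i in range(10)])
X = np.vstack([steady, one_hit])
labels, _ = kmeans(X, k=2)
```

Real chart data would need the pre-processing the abstract describes (aligning weeks, handling songs that leave and re-enter the chart) before trajectories become comparable fixed-length vectors; dedicated time-series clustering methods can also use shape-aware distances instead of the plain Euclidean distance shown here.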

Effects of Nitrogen , Phosphorus and Potassium Application Rates on Oversown Hilly Pasture under Different Levels of Inclination II. Changes on the properties, chemical composition, uptake and recovery of mineral nutrients in mixed grass/clover sward (경사도별 3요소시용 수준이 겉뿌림 산지초지에 미치는 영향 II. 토양특성 , 목초의 무기양분함량 및 3요소 이용율의 변화)

  • 정연규;이종열
    • Journal of The Korean Society of Grassland and Forage Science
    • /
    • v.5 no.3
    • /
    • pp.200-206
    • /
    • 1985
  • This field experiment was undertaken to assess the effects of three levels of inclination (10°, 20°, and 30°) and four rates of N-P₂O₅-K₂O (0-0-0, 14-10-10, 28-25-25, and 42-40-40 kg/10a) on the establishment, yield and quality, and botanical composition of a mixed grass-clover sward. This second part is concerned with the soil chemical properties, concentrations and uptake of mineral nutrients, and percent recovery and efficiency of NPK. The results obtained after a two-year experiment are summarized as follows: 1. The pH, exchangeable Mg and Na, and base saturation in the surface soils were decreased by increasing the grade of inclination, whereas organic matter and available P₂O₅ tended to increase. However, the changes in the Ca content and the equivalent ratio of K/√(Ca+Mg) were not significant. The pH, exchangeable Ca and Mg, and base saturation were reduced by increasing the NPK rate, whereas available P₂O₅, exchangeable K, and the equivalent ratio of K/√(Ca+Mg) tended to increase. 2. The concentrations of mineral nutrients in grasses and weeds were not significantly affected by increasing the grade of slope in the hilly pasture, whereas the concentrations of N, K, and Mg in the legume were lowest on the steep slope, which seemed to be related to the low legume yield. The Mg concentrations of all forage species were below the critical level for good forage growth, indicating a likelihood of grass tetany. 3. The increase of the NPK rate resulted in an increase of N, K, and Na concentrations, and a decrease of Mg and Ca, in grasses. The P concentration was increased by P application, but there were no differences among the P rates applied. It also resulted in a slight increase of K and a decrease of Mg in the legume, but the contents of N, Ca, and Na were not affected. On the other hand, it did not affect the mineral contents in weeds except for a slight increase of N. 
The mixed forages showed an increase in N and K contents, a decrease in Ca and Mg, and a slight change in P and Na. 4. The percent recovery of N, P, and K by the mixed forages was greatly decreased by increasing the grade of inclination and the NPK rate, and was high in the order K > N > P. The efficiency of the mixed NPK applications was likewise decreased. The efficiency of the mixed NPK fertilizers absorbed was slightly decreased by the increased NPK rate, but was not affected by the grade of inclination.

  • PDF

A Study on the Historical Values of the Changes of Forest and the Major Old Big Trees in Gyeongbokgung Palace's Back Garden (경복궁 후원 수림의 변화과정 및 주요 노거수군의 역사적 가치규명)

  • Shin, Hyun-Sil
    • Journal of the Korean Institute of Traditional Landscape Architecture
    • /
    • v.40 no.2
    • /
    • pp.1-13
    • /
    • 2022
  • This paper examined the history and development of Gyeongbokgung Palace's back garden based on historical materials and drawings such as the Joseon Ilgi (Diaries of the Joseon Dynasty), the Joseon Wangjo Sillok (Annals of the Joseon Dynasty), the Doseongdaejido (Great Map of Seoul), the Bukgwoldohyeong (Drawing Plan of the Northern Palace), the Bukgung Palace Restoration Plan, and the Restoration Planning of Gyeongbokgung Palace, and the following results were derived. First, it was confirmed that the back garden of Gyeongbokgung Palace had been famous for its superb location since the Goryeo Dynasty, when the area was called Namkyeong and a shrine was built there, and that castles and palaces were already constructed during the Goryeo Dynasty under the influence of Fengshui-Docham (風水圖讖) and the Zhouli·Kaogongji (周禮考工記). Although the back garden of Gyeongbokgung Palace stayed out of the limelight in the early Joseon Dynasty, it has held value as a living space for the head of state from King Gojong to the present. Second, in order to clearly identify the boundaries of the back garden, literature such as the Doseongdo (Map of the Capital), La Coree, the Gyeongmudae Area, the Japanese Geography Custom Compendium, the Korean Photo Album, the JoseonGeonchukdoJip (Illustration Book of Joseon Construction), and the Urban Planning Survey of Gyeongseong was reviewed, and it was confirmed that the current Blue House area outside Sinmumun Gate was built outside the precincts of Gyeongbokgung Palace. The area, devastated by the Japanese invasion of Korea in 1592, was used as a space where public offices were consolidated during the reconstruction under King Gojong. During the Japanese colonial era, the site's value as the back garden of the primary palace was damaged, as the palace buildings of the back garden were relocated or destroyed, but after liberation it was used as the presidential residence and its value as the seat of the ruler was restored.
Third, the back garden of Gyeongbokgung Palace underwent spatial changes through the Japanese invasion and the Japanese colonial era. The place with the greatest geographical change was the Gyeongnongjae area, where the residence of the Japanese Government-General of Korea was built and the use of the land changed frequently. On the other hand, the current Gyeongmudae area, the forests next to the small garden, and the forests of Baekak were preserved in the form of traditional forests. To verify this, a 1:1200 floor plan of the inner Gyeongmudae residence and satellite images were overlaid with Sinmumun Gate as the reference point, confirming that the water path originating from Baekak still exists today and that the forest area did not change. Fourth, in the areas where the traditional forest landscape was inherited, functional changes in the topography were small and the major groups of old big trees have been maintained. The old trees identified in this area are indicator species of historical value. Representatively, the Pinus densiflora for. multicaulis Uyeki located in Nokjiwon Garden is presumed to be one of the trees of that species planted next to Yongmundang, and has historical significance as a photo spot at dinners for heads of state and important guests. Lastly, in order to continuously preserve and manage the value of Gyeongbokgung Palace and the Blue House, it is urgent to clarify the value of the space through the excavation of historical materials from the Japanese colonial era and to establish a hierarchy of garden archaeology by era. In addition, to preserve the historical landscape extending from the Joseon Dynasty to the modern era, the area of the old big trees, perpetuated since the past, should not be damaged, and a follow-up study is needed to survey all the forests of the Blue House.

A study on Broad Quantification Calibration to various isotopes for Quantitative Analysis and its SUVs assessment in SPECT/CT (SPECT/CT 장비에서 정량분석을 위한 핵종 별 Broad Quantification Calibration 시행 및 SUV 평가를 위한 팬텀 실험에 관한 연구)

  • Hyun Soo, Ko;Jae Min, Choi;Soon Ki, Park
    • The Korean Journal of Nuclear Medicine Technology
    • /
    • v.26 no.2
    • /
    • pp.20-31
    • /
    • 2022
  • Purpose: Broad Quantification Calibration (B.Q.C) is the procedure that enables quantitative analysis, i.e. measurement of the Standard Uptake Value (SUV), on a SPECT/CT scanner. B.Q.C was performed with Tc-99m, I-123, I-131, and Lu-177, and phantom images were then acquired to check whether the SUVs were measured accurately. Because there is no standard for an SUV test in SPECT, the ACR Esser PET phantom was used as an alternative. The purpose of this study was to lay the groundwork for quantitative analysis with various isotopes on a SPECT/CT scanner. Materials and Methods: A Siemens Symbia Intevo 16 and an Intevo Bold SPECT/CT were used for this study. The B.Q.C procedure has two steps: first, a point-source sensitivity calibration, and second, a volume sensitivity calibration that computes the Volume Sensitivity Factor (VSF) using a cylinder phantom. To verify the SUV, images of the ACR Esser PET phantom were acquired, and the SUVmean of the background and the SUVmax of the hot vials (25, 16, 12, and 8 mm) were measured. SPSS was used to analyze the difference in SUV between the Intevo 16 and the Intevo Bold with the Mann-Whitney test. Results: The sensitivities (CPS/MBq) of Detectors 1 and 2 and the VSF (Intevo 16 D1 sensitivity/D2 sensitivity/VSF, followed by Intevo Bold) were 87.7/88.6/1.08 and 91.9/91.2/1.07 for Tc-99m, 79.9/81.9/0.98 and 89.4/89.4/0.98 for I-123, 124.8/128.9/0.69 and 130.9/126.8/0.71 for I-131, and 8.7/8.9/1.02 and 9.1/8.9/1.00 for Lu-177, respectively. The results of the SUV test with the ACR Esser PET phantom (Intevo 16 background SUVmean/25 mm SUVmax/16 mm/12 mm/8 mm, followed by Intevo Bold) were 1.03/2.95/2.41/1.96/1.84 and 1.03/2.91/2.38/1.87/1.82 for Tc-99m, 0.97/2.91/2.33/1.68/1.45 and 1.00/2.80/2.23/1.57/1.32 for I-123, 0.96/1.61/1.13/1.02/0.69 and 0.94/1.54/1.08/0.98/0.66 for I-131, and 1.00/6.34/4.67/2.96/2.28 and 1.01/6.21/4.49/2.86/2.21 for Lu-177. There was no statistically significant difference in SUV between the Intevo 16 and the Intevo Bold (p > 0.05). Conclusion: In the past, only qualitative analysis was possible with a gamma camera.
A SPECT/CT scanner, by contrast, provides not only anatomical localization and 3D tomography but also quantitative analysis with SUV measurements. We laid the groundwork for quantitative analysis with the isotopes Tc-99m, I-123, I-131, and Lu-177 by carrying out B.Q.C, and verified the SUV measurements with the ACR phantom. Periodic calibration is needed to maintain the precision of this quantitative evaluation. As a result, quantitative analysis can be provided on follow-up SPECT/CT exams to evaluate therapeutic response in theranostics.
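Conceptually, the calibration chain above converts reconstructed counts to activity via the detector sensitivity (CPS/MBq) corrected by the VSF, and the SUV then normalizes the resulting activity concentration by injected activity per unit body weight. A minimal sketch of that chain, assuming 1 g/mL tissue density; all function names and numbers are illustrative, not the paper's data or the vendor's implementation:

```python
def suv(count_rate_cps, sensitivity_cps_per_mbq, vsf, region_volume_ml,
        injected_activity_mbq, body_weight_g):
    """Standard Uptake Value from a calibrated SPECT measurement.

    Counts -> activity (MBq) via detector sensitivity (CPS/MBq) corrected
    by the Volume Sensitivity Factor (VSF), then activity concentration
    (MBq/mL) normalized to injected activity per gram of body weight.
    """
    activity_mbq = count_rate_cps / (sensitivity_cps_per_mbq * vsf)
    concentration = activity_mbq / region_volume_ml          # MBq/mL
    dose_per_gram = injected_activity_mbq / body_weight_g    # MBq/g
    return concentration / dose_per_gram

# Illustrative check: a region of a uniform phantom whose concentration
# matches the whole-phantom average should come out near SUV = 1.
print(round(suv(1.9, 87.7, 1.08, 2.0, 185.0, 18500.0), 2))  # 1.0
```

This is why the background SUVmean values in the phantom results cluster near 1.0: a correct sensitivity and VSF make the measured concentration match the known uniform fill.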

An Exploratory Study on Marketing of Financial Services Companies in Korea (한국 금융회사 마케팅 현황에 대한 탐색 연구)

  • Chun, Sung Yong
    • Asia Marketing Journal
    • /
    • v.12 no.2
    • /
    • pp.111-133
    • /
    • 2010
  • Marketing financial services used to be easier. Today, the competition in financial services is fierce. Not only has the competition become more intense; financial services have also changed structurally. In an environment of diverse customer needs and severe competition, marketing in the financial services industry is more difficult and more important than before. However, there are still not enough studies on financial services marketing in Korea, whereas many research papers are published regularly in international journals. The purpose of this paper is (1) to review the literature on financial services marketing, (2) to investigate current marketing activities based on in-depth interviews with financial marketing managers in Korea, and (3) to suggest implications for future research on financial services marketing. Financial products are not consumer products. In fact, they are not products at all in the way product marketing is usually described; nor are they altogether like services. The financial industry operates in a unique way, and its marketing tasks are correspondingly complex. The literature review shows that, compared with research on marketing in other industries, there has been a lack of basic studies dealing with the inherent characteristics of financial services marketing. Studies in domestic marketing journals have so far focused only on general customer behavior and on special issues in some financial industries. For more effective financial services marketing, however, we have to answer the following questions. Is there any difference between financial services marketing and consumer packaged goods marketing? What are the differences between financial services marketing and other services marketing such as education and health services? Are there different ways of marketing among banks, securities firms, insurance firms, and credit card companies?
In other words, we need more detailed research as well as basic studies on financial services marketing. For example, we need concrete definitions of financial services marketing, bank marketing, securities firm marketing, and so on. It is also necessary to compare the characteristics of each kind of marketing within the financial services industry. The products sold in each market differ in characteristics such as duration and degree of risk-taking, which means there are sub-categories within financial services marketing that future research must consider. It is also necessary to study customer decision-making processes in financial markets. There has been little research on how customers search for and process information, compare alternatives, make final decisions, and repeat their choices. Because financial services have unique characteristics, customer behavior must be understood differently than in other service markets. Considering the rapid growth of financial markets and the coming severe competition between domestic and global financial companies, it is time to start more systematic and detailed research on financial services marketing in Korea. In the second part of this paper, I analyzed the results of in-depth interviews with 20 marketing managers of financial services companies in Korea. I found that the role of marketing departments in Korean financial companies is mainly focused on short-term activities such as sales support, promotion, and CRM data analysis, although the size and history of the marketing departments show some signs of maturity. Most companies established official marketing departments before 2001, and the average marketing department has about 58 employees.
However, marketing managers in eight companies (40% of the sample) still think that the purpose of marketing is only to support and manage general sales activities, which shows that some companies hold a sales-oriented rather than a marketing-oriented concept. I also found three key words that marketing managers consider important in financial services markets: (1) trust in the customer relationship, (2) brand differentiation, and (3) rapid response to customer needs. Half of the sample said that "trust" is the most important key word in financial services marketing. Interestingly, 80% of the banks and securities companies consider "trust" most important, whereas managers in credit card companies consider "rapid response to customer needs" the most important key word in their market. In addition, marketing managers recognize different problems depending on the type of financial industry they belong to. In banks and insurance companies, marketing managers consider "a lack of communication with other departments" the most serious problem, whereas in securities firms it is "a lack of utilization of customer data." These results imply that the factors important to customer satisfaction differ by type of financial industry, and managers have to consider them to market financial products more effectively. For example, marketing managers will need to study the factors that affect customer satisfaction, repeat purchase, degree of risk-taking, and the possibility of cross-selling in each type of financial industry. I also suggested six hypothetical propositions for future research.
