• Title/Abstract/Keyword: Vehicle

Search results: 20,540 items (processing time: 0.055 seconds)

영상 및 기온 데이터 기반 배추 생육예측 모형 개발 (Development of Kimchi Cabbage Growth Prediction Models Based on Image and Temperature Data)

  • 강민서;심재상;이혜진;이희주;장윤아;이우문;이상규;위승환
    • 생물환경조절학회지 / Vol. 32, No. 4 / pp. 366-376 / 2023
  • This study was conducted to develop models that predict the growth of Kimchi cabbage using image and environmental data. 'Cheongmyeong Gaeul' cabbage was transplanted three times (July 11, July 19, and July 27) in an experimental field in Pyeongchang-gun, Gangwon-do, and growth, image, and environmental data were collected until September 12. To select key factors for the growth prediction model, correlation analysis was performed on the collected growth and weather data: the correlation coefficients between fresh weight and GDD and between fresh weight and cumulative solar radiation were both high at 0.88, and fresh weight showed significant correlations with plant height (0.78) and with canopy coverage area (0.79). Among the highly correlated factors, and with reference to prior literature, coverage area was selected as the key factor from the image data and growing degree days (GDD) from the environmental data. Models predicting the fresh weight, leaf number, and leaf area of cabbage were developed by combining GDD, coverage area, and growth data. Quadratic, sigmoid, and logistic single-factor models were built, and evaluation showed that the sigmoid-type prediction model had the best explanatory power. A multi-factor growth prediction model combining GDD and coverage area achieved coefficients of determination of 0.90, 0.95, and 0.89 for fresh weight, leaf number, and leaf area, respectively, confirming improved accuracy over the single-factor models. When the developed model was validated against survey results from a separate validation field, the coefficient of determination between observed and predicted values was 0.91 with an RMSE of 134.2 g, indicating high prediction accuracy. Existing growth monitoring has typically predicted from weather data alone or image data alone, which either fails to reflect field conditions or fails to capture the heading characteristic of Kimchi cabbage, resulting in low accuracy. By combining the two prediction methods so that each compensates for the other's weaknesses, the accuracy of the staple-vegetable yield forecasting carried out in Korea is expected to improve.
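The sigmoid single-factor model described above can be sketched in a few lines of Python. The parameter values and the base temperature below are hypothetical placeholders for illustration, not the fitted values from the paper:

```python
import math

def gdd(t_mean_daily, t_base=5.0):
    """Accumulate growing degree days from daily mean temperatures (°C).
    The base temperature is an assumed placeholder, not the paper's value."""
    return sum(max(t - t_base, 0.0) for t in t_mean_daily)

def sigmoid_growth(g, w_max, k, g_mid):
    """Sigmoid growth curve: predicted fresh weight (g) as a function of GDD.
    w_max = asymptotic weight, k = steepness, g_mid = GDD at the inflection."""
    return w_max / (1.0 + math.exp(-k * (g - g_mid)))

# Hypothetical parameters and temperatures, for illustration only.
W_MAX, K, G_MID = 3000.0, 0.008, 900.0
temps = [22.0, 24.5, 21.0, 23.0]                  # daily mean temperatures
print(round(gdd(temps), 1))                       # accumulated GDD → 70.5
print(round(sigmoid_growth(900.0, W_MAX, K, G_MID), 1))  # at g_mid: W_MAX/2 → 1500.0
```

At the inflection point the curve returns exactly half the asymptote, which is a quick sanity check when fitting such models.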

클라우드 컴퓨팅 서비스의 도입특성이 조직의 성과기대 및 사용의도에 미치는 영향에 관한 연구: 혁신확산 이론 관점 (A Study on the Effect of the Introduction Characteristics of Cloud Computing Services on the Performance Expectancy and the Intention to Use: From the Perspective of the Innovation Diffusion Theory)

  • 임재수;오재인
    • Asia Pacific Journal of Information Systems / Vol. 22, No. 3 / pp. 99-124 / 2012
  • Our society has long been talking about the necessity for innovation. Since companies in particular need to innovate across their overall processes, they have attempted to apply many innovation factors in the field and have come to pay more attention to innovation. To achieve this goal, companies have applied various information technologies (IT) as a means of innovation, and IT has consequently developed greatly. It is natural that the field of IT has faced another revolution, called cloud computing, which is expected to bring innovative changes in software application via the Internet, data storage, the use of devices, and their operation. As a vehicle of innovation, cloud computing is expected to lead changes and advancement in our society and the business world. Although many scholars have researched a variety of topics regarding innovation via IT, few studies have dealt with cloud computing as IT. Thus, the purpose of this paper is to take innovation attributes from previous articles as characteristic variables and to clarify how these variables affect companies' performance expectancy and their intention to use cloud computing. The results of the analysis of the data collected in this study are as follows. First, the study utilized a research model developed from the innovation diffusion theory to identify influences on the adoption and spread of IT for cloud computing services. Second, this study summarized the characteristics of cloud computing services as a new concept that introduces innovation at its early stage of adoption by companies. Third, a theoretical model relating to future innovation is provided by suggesting innovation-characteristic variables for adopting cloud computing services.
Finally, this study identified the factors affecting performance expectancy and the intention to use the cloud computing service for companies considering its adoption. The study deploys independent variables aligned with the characteristics of cloud computing services based on the innovation diffusion model, and uses Performance Expectancy and Intention to Use, drawn from the UTAUT theory, as the parameter and the dependent variable, respectively. Independent variables in the research model include Relative Advantage, Complexity, Compatibility, Cost Saving, Trialability, and Observability. In addition, 'Acceptance for Adoption' is applied as a moderating variable to verify the influences on the performance expected from the cloud computing service. The validity of the research model was secured by performing factor analysis and reliability analysis. After confirmatory factor analysis was conducted using AMOS 7.0, the 20 hypotheses were tested through structural equation modeling, and 12 of the 20 were accepted. For example, Relative Advantage turned out to have a positive effect on both Individual Performance and Strategic Performance, and showed a meaningful correlation affecting Intention to Use directly. This is consistent with the many diffusion articles that regard Relative Advantage as the most important factor in predicting the rate of innovation acceptance. Regarding the influence of Compatibility and Cost Saving on Performance Expectancy, Compatibility has a positive effect on both Individual Performance and Strategic Performance, and showed a meaningful correlation with Intention to Use. However, although the cloud computing service has become a strategic adoption issue for companies, Cost Saving turns out to affect Individual Performance without a significant influence on Intention to Use.
This indicates that companies expect practical gains, such as time and cost savings and financial improvements, from adopting the cloud computing service in the budget-squeezed environment following the global economic crisis of 2008; likewise, this positively affects strategic performance in companies. Trialability proved to have no effect on Performance Expectancy. This indicates that the survey participants are willing to bear the risk of the high uncertainty caused by innovation because, as innovators and early adopters, they actively pursue information about new ideas. In addition, they believe it is unnecessary to test the cloud computing service before adoption, because various types of the service already exist. However, Observability positively affected both Individual Performance and Strategic Performance, and also showed a meaningful correlation with Intention to Use. From the analysis of the direct effects of the innovative characteristics of the cloud computing service on Intention to Use, excluding the parameters, Relative Advantage, Compatibility, and Observability showed positive influences, while Complexity, Cost Saving, and Trialability did not affect Intention to Use. Relative Advantage, Compatibility, and Observability, which were believed to be the most important factors for Performance Expectancy among the characteristics of the cloud computing service, showed significant correlations in the various cause-and-effect analyses. Cost Saving showed a significant relation with Strategic Performance, which indicates that the cost to build and operate IT is a burden on management; thus, the cloud computing service reflects the expectation of an alternative that reduces the investment and operational cost of IT infrastructure amid the recent economic crisis.
The cloud computing service is not yet pervasive in the business world, but it is spreading rapidly all over the world because of its inherent merits and benefits. Moreover, the results of this research regarding the diffusion of innovation differ somewhat from those of existing articles. This seems to be because the cloud computing service carries a strong innovative factor that produces a new paradigm shift, whereas most IT studied under the theory of innovation diffusion is limited to companies and organizations. In addition, the participants in this study are believed to play an important role as innovators and early adopters in introducing the cloud computing service, and to have the capacity to bear the higher uncertainty of innovation. In conclusion, the introduction of the cloud computing service is a critical issue in the business world.


사례기반추론과 텍스트마이닝 기법을 활용한 KTX 차량고장 지능형 조치지원시스템 연구 (An Intelligence Support System Research on KTX Rolling Stock Failure Using Case-based Reasoning and Text Mining)

  • 이형일;김종우
    • 지능정보연구 / Vol. 26, No. 1 / pp. 47-73 / 2020
  • A KTX trainset is a single system composed of numerous mechanical and electrical devices and parts, and its maintenance demands considerable expertise and accumulated experience from maintenance workers. When a vehicle failure occurs, the time needed to solve the problem and the quality of the work vary with the maintainer's knowledge and experience, which in turn affects vehicle availability. Problem solving is generally based on the failure manual, but experienced, skilled experts diagnose and act quickly by combining the manual with their personal know-how. Because such knowledge exists in tacit form, it is difficult to transfer fully to successors, and there have been studies attempting to convert it into explicit, data-based knowledge by developing case-based expert systems for rolling stock. However, research on KTX trainsets, which are deployed most heavily on trunk lines, and the development of systems that retrieve similar cases by extracting textual features are still scarce. This study therefore proposes an intelligent action-support system that provides action guides for newly occurring failures, using as problem-solving cases the diagnosis and action records of past failures handled with the know-how of vehicle maintenance experts. To this end, a case base was built from vehicle failure data generated from 2015 to 2017; failure features were extracted with the dimensionality-reduction techniques non-negative matrix factorization (NMF), latent semantic analysis (LSA), and Doc2Vec; similar cases were retrieved by measuring the cosine distance between vectors; and the performance of the actions proposed by each algorithm was compared. The analysis showed that when a failure description contains few keywords, similar-case retrieval and action suggestion perform well even when cosine similarity is applied directly, and the comparison of dimensionality-reduction techniques showed that Doc2Vec, a method that preserves contextual meaning, performed best. Text mining is being studied for application in many fields, but research using such text data is still lacking in environments like the one targeted here, where specialized terminology abounds and access to data is restricted. The contribution of this study is an intelligent diagnosis system that complements keyword-based case retrieval by applying text mining to extract failure features, retrieve cases, and suggest actions. It is expected to serve as foundational material for the stepwise development of a diagnosis system usable directly in the field.
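The retrieval core of such a system, vectorizing a new fault description and ranking past cases by cosine similarity, can be sketched without the dimensionality-reduction stage. The toy case base, field names, and fault texts below are invented for illustration and are not from the study's data:

```python
import math
from collections import Counter

def vectorize(text):
    """Bag-of-words term-frequency vector for a whitespace-tokenized fault log."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse term-frequency vectors."""
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(case_base, query, top_k=1):
    """Return the top_k past cases most similar to the new fault description."""
    qv = vectorize(query)
    ranked = sorted(case_base, key=lambda c: cosine(vectorize(c["fault"]), qv),
                    reverse=True)
    return ranked[:top_k]

# Hypothetical case base: past fault descriptions with the action taken.
cases = [
    {"fault": "traction motor overheat alarm",
     "action": "inspect cooling fan and replace filter"},
    {"fault": "door sensor failure at station",
     "action": "recalibrate door limit switch"},
]
best = retrieve(cases, "overheat alarm on traction motor")[0]
print(best["action"])  # → inspect cooling fan and replace filter
```

In the paper's pipeline, `vectorize` would be replaced by NMF-, LSA-, or Doc2Vec-reduced vectors, but the ranking step stays the same.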

랫드에서 TCDD 투여에 의해 유도된 생체독성의 고려홍삼 추출물에 의한 억제 효과 (Protective Effects of Korean Panax Ginseng Extracts against TCDD-induced Toxicities in Rat)

  • 최수진;손형옥;신한재;현학철;이동욱;송용범;이수현;강동호;임학섭;이철원;문자영
    • Journal of Ginseng Research / Vol. 32, No. 4 / pp. 382-389 / 2008
  • We explored whether Korean red ginseng extract can prevent or suppress the toxicities induced in experimental animals exposed to TCDD. TCDD (25 μg/kg bw, single dose) and red ginseng extract (100 mg/kg bw, every other day) were administered intraperitoneally, alone or in combination, and changes in body weight and organ weights, urinalysis parameters, and hematological and blood-chemistry values were observed over 32 days. TCDD alone considerably reduced the rate of body-weight gain compared with the normal group and the ginseng-only group, and co-administration of ginseng extract somewhat restored the gain reduced by TCDD alone. In the TCDD-only group, liver weight increased distinctively relative to controls at days 2, 5, and 16 after dosing, whereas in the ginseng-only group liver weight was slightly below that of controls; in rats given both TCDD and ginseng extract, the increase in liver weight was slightly smaller than with TCDD alone. Kidney weight in the TCDD-only group decreased somewhat relative to controls, but not significantly, and co-administration with ginseng extract made no difference from TCDD alone; in the ginseng-only group, kidney weight tended to fall slightly below controls early on (days 1-2) but recovered to control levels from day 5. Spleen weight dropped transiently within 2-3 days of TCDD alone but recovered to control levels as exposure time increased; in the ginseng-only and combined groups, spleen weight increased significantly relative to controls from day 16 after administration. Brain weight showed no significant change in any of the single or combined treatment groups. Urine specific gravity in controls generally stayed at or above 1.030 regardless of age, whereas in the ginseng-only group it tended to fall to about 1.02 from day 14 after administration. In the TCDD-only group it tended to fall to about 1.02 early on, and with ginseng co-administration this decrease to about 1.02 appeared after day 14. Urinary total protein remained at about 100 μg/dL in controls throughout the experiment, while in the TCDD-only and combined groups the number of animals showing 300 μg/dL or more increased; the ginseng-only group remained near control levels. Urinary ketone bodies tended to rise with age in controls, with no differences among groups. Glucose, ketones, bilirubin, occult blood, nitrite, and urobilinogen were nearly identical across all groups, and pH characteristically rose with age but showed no group differences. Blood-chemistry tests showed that AST after TCDD alone was generally higher than in controls throughout the experiment, with the highest AST value in the 32-day group. AST after ginseng alone, in contrast, decreased as exposure time passed, and the AST elevated by TCDD returned to normal levels from day 16 of co-administration with ginseng extract. Serum ALT in TCDD-only rats was similar to control levels up to day 16 but rose considerably above controls by day 32. By contrast, in the group given both TCDD and ginseng extract, ALT fell sharply in the 16- and 32-day groups to below the control level. Ginseng alone did not affect ALT activity over the entire experimental period.

가설품패시인(假设品牌是人), 혹통과고사포장장품패의인화(或通过故事包装将品牌拟人化) (If This Brand Were a Person, or Anthropomorphism of Brands Through Packaging Stories)

  • Kniazeva, Maria;Belk, Russell W.
    • 마케팅과학연구 / Vol. 20, No. 3 / pp. 231-238 / 2010
  • The focus of this study is the anthropomorphism of brands, defined as treating brands as human beings. Specifically, the study aims to understand how brands are anthropomorphized. By analyzing consumers' readings of the stories on food packaging, we try to show how marketers and consumers anthropomorphize a set of brands and create meaning. Our research question concerns the possibility that a brand may carry multiple or single meanings, associations, and personalities for different consumers. We first highlight the theoretical and practical importance of the study and explain why we focus on packaging as a vehicle for conveying brand meaning. We then describe our qualitative methodology and discuss the results, concluding with managerial implications and suggestions for future research. The study had consumers read a brand-meaning vehicle directly and then freely express, verbally, the meanings they perceived. Specifically, to obtain data on perceived meanings, we asked participants to read the non-nutritional stories on selected branded food packages. Packaging has not received enough attention in consumer research (Hine, 1995). Until now, research has focused on packaging's utilitarian function, forming a body of work exploring the effects of nutritional information (e.g., Loureiro, McCluskey and Mittelhammer, 2002; Mazis and Raymond, 1997; Nayga, Lipinski and Savur, 1998; Wansink, 2003). One exception is a recent study that turned attention to non-nutritional package narratives, viewing them as cultural products and vehicles for brand mythmaking (Kniazeva and Belk, 2007). The next step is to explore how these mythmaking activities affect perceived brand personality and how those perceptions relate to consumers; these are the points this study emphasizes. We use in-depth interviews to help overcome the limitations of quantitative research. Our convenience sample was demographically and psychographically diverse in order to capture varied consumer perceptions of packaging stories. Our participants are middle-class American residents who do not exhibit the extreme lifestyles of the "cultural creatives" described by Thompson (2004). Nine participants were interviewed about their food consumption preferences and behavior. They were asked to look at 12 displayed food product packages and read the text on them; we then proceeded with questions focused on the consumers' interpretations of what they read (Scott and Batra, 2003). On average, each participant discussed four to five packages. The one-on-one in-depth interviews lasted up to half an hour each and were recorded and transcribed, yielding 140 pages of text. The products came from a local grocery store on the US West Coast and represent the basic range of food product categories, including snacks, canned food, cereal, baby food, and tea. We analyzed the data using the grounded-theory procedures proposed by Strauss and Corbin (1998). The results do not support the one-brand/one-personality notion assumed by earlier research. We thus show that, in consumers' eyes, multiple brand personalities can coexist comfortably in the same brand, even though marketers try to create a more singular brand personality. We extend Fournier's (1998) proposition that a person's life projects can shape the strength and nature of brand relationships, finding that these life projects also affect perceived brand anthropomorphism and meaning. Fournier proposed a conceptual framework linking consumers' life themes (Mick and Buhl, 1992) to the relational roles of anthropomorphized products. We find that consumers' life projects shape the ways brands are anthropomorphized and related to consumers' existing concerns. Through our participants we identify two routes to brand anthropomorphism. First, brand personality is created through perceived demographics, psychographics, and social personality. Second, the brand personalities perceived by consumers are blended with consumers' own concerns and identities, either to connect brands to themselves (brand as friend, family member, next-door neighbor) or to distance brand personalities from themselves (brand as used-car salesman, "a bunch of executives"). By focusing on food product packaging, we illuminate a very specific, widely used, but rarely studied marketing communication vehicle: the brand story. Recent research has come to see packaging as a mythmaker. Marketers face a growing challenge to craft verbal stories connected to products and the consumers who consume them, and it has been suggested that "diversifying the raw materials for consumer myths that create demand is an arguable need of postmodern consumers" (Kniazeva and Belk, 2007). As storytelling vehicles, food packages can use rational and emotional approaches to offer consumers either "lectures" or "dramas" (Randazzo, 2006), myths (Kniazeva and Belk, 2007; Holt, 2004; Thompson, 2004), or meanings (McCracken, 2005) as raw materials for anthropomorphizing products. Crafting brand personality rests in the hands of the writer/marketer and in the minds of readers/consumers, and it is these consumers who will give brands their meaningful faces.

다년도 분광 데이터를 이용한 콩의 생체중, 엽면적 지수 추정 (Estimation of Fresh Weight and Leaf Area Index of Soybean (Glycine max) Using Multi-year Spectral Data)

  • 장시형;유찬석;강예성;박준우;김태양;강경석;박민준;백현찬;박유현;강동우;쩌우쿤옌;김민철;권연주;한승아;전태환
    • 한국농림기상학회지 / Vol. 23, No. 4 / pp. 329-339 / 2021
  • Soybean, a representative upland crop, is sensitive to environmental conditions such as temperature, moisture, and soil, so field management during cultivation is very important. Spectral techniques that can measure crop status non-destructively and without contact make it possible to raise quality and yield through growth diagnosis and prescription, such as yield forecasting and identification of crop stress, diseases, and pests. In this study, a multispectral sensor mounted on a rotary-wing UAV was used to develop soybean growth estimation models in experimental fields, and validation was carried out in farm fields to confirm reproducibility. Models were developed by linear regression between soybean growth data (fresh weight, LAI) and the normalized vegetation indices (NDVI, GNDVI) and simple-ratio vegetation indices (RRVI, GRVI) computed from the spectral data, and were validated in farm fields located in Goesan. For fresh weight, because the normalized indices saturate, the model using the simple-ratio index GRVI performed best (R² = 0.74, RMSE = 246 g/m², RE = 34.2%). Validation of the fresh-weight model on the Goesan farm fields gave RMSE = 392 g/m² and RE = 32%; split by cropping system, the single-cropping and double-cropping fresh-weight models gave RMSE = 315 g/m² with RE = 26% and RMSE = 381 g/m² with RE = 31%, respectively. When fresh-weight models were developed separately for the test-field years with similar accumulated temperature for each cropping system (2018+2020 versus 2019), the single-year (2019) model performed better. Comparing the accumulated-temperature-matched validation with the original validation by cropping system, the single-cropping fields improved by 29.1% and 34.3% in RMSE and RE respectively, but the double-cropping fields worsened by -19.6% and -31.3%. With environmental factors beyond accumulated temperature and additional spectral and growth data, it should be possible to estimate the growth of soybean cultivated under diverse environmental conditions.
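The index-plus-regression pipeline above can be sketched in a few lines. The index definitions follow common usage (NDVI = (NIR − Red)/(NIR + Red); GRVI is assumed here to be the simple ratio NIR/Green), and the reflectance and fresh-weight values are toy numbers, not the paper's measurements:

```python
def ndvi(nir, red):
    """Normalized Difference Vegetation Index; saturates at high biomass."""
    return (nir - red) / (nir + red)

def grvi(nir, green):
    """Simple-ratio index assumed as NIR/Green (a common GRVI definition)."""
    return nir / green

def ols_fit(xs, ys):
    """Ordinary least squares for y = a*x + b; returns (slope, intercept)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx

# Toy band reflectances and plot-level fresh weights (g/m²), illustration only.
print(round(ndvi(0.80, 0.10), 4))          # dense canopy → value near 1
grvi_vals = [2.0, 3.0, 4.0, 5.0]
fresh_weight = [400.0, 650.0, 900.0, 1150.0]
a, b = ols_fit(grvi_vals, fresh_weight)
print(a, b)                                # perfectly linear toy data: 250.0 -100.0
```

With real data the fit would of course not be exact; R², RMSE, and RE would then be computed from the residuals of this regression.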

A Study on the Meaning and Strategy of Keyword Advertising Marketing

  • Park, Nam Goo
    • 유통과학연구 / Vol. 8, No. 3 / pp. 49-56 / 2010
  • At the initial stage of Internet advertising, banner advertising came into fashion. As the Internet developed into a central part of daily life and competition in the online advertising market grew fierce, there was not enough space for banner advertising, which rushed only to portal sites. All these factors were responsible for an upsurge in advertising prices. Consequently, the high-cost, low-efficiency problems of banner advertising were raised, which led to the emergence of keyword advertising as a new type of Internet advertising to replace its predecessor. In the early 2000s, when Internet advertising became active, display advertising, including banner advertising, dominated the Net. However, display advertising showed signs of gradual decline and registered negative growth in 2009, whereas keyword advertising grew rapidly and began to outdo display advertising as of 2005. Keyword advertising refers to the advertising technique that exposes relevant advertisements at the top of search sites when one searches for a keyword. Instead of exposing advertisements to unspecified individuals as banner advertising does, keyword advertising, a targeted advertising technique, shows advertisements only when customers search for a desired keyword, so that only highly prospective customers are given a chance to see them. In this context, it is also referred to as search advertising. It is regarded as more aggressive advertising with a higher hit rate than earlier advertising in that, instead of the seller discovering customers and running advertisements for them as with TV, radio, or banner advertising, it exposes advertisements to customers who come visiting. Keyword advertising makes it possible for a company to seek publicity online simply by using a single word and to achieve maximum efficiency at minimum cost.
The strong point of keyword advertising is that customers can directly contact the products in question, making it more efficient than the advertisements of mass media such as TV and radio. Its weak point is that a company must register its advertisement on each and every portal site and finds it hard to exercise substantial supervision over its advertisements, with the risk that advertising expenses exceed profits. Keyword advertising serves as the most appropriate advertising method for the sales and publicity of small and medium enterprises, which need maximum advertising effect at low advertising cost. At present, keyword advertising is divided into CPC advertising and CPM advertising. The former is known as the most efficient technique and is also referred to as advertising based on the metered-rate system: a company pays for the number of clicks on a keyword that users have searched. This model is representatively adopted by Overture, Google's AdWords, Naver's Clickchoice, and Daum's Clicks. CPM advertising depends on the flat-rate payment system, making a company pay for its advertisement on the basis of the number of exposures, not the number of clicks. This method fixes a price for the advertisement per 1,000 exposures and is mainly adopted by Naver's Timechoice, Daum's Speciallink, and Nate's Speedup. At present, the CPC method is most frequently adopted. The weak point of the CPC method is that advertising costs can rise through constant clicks from the same IP. If a company makes good use of strategies that maximize the strong points of keyword advertising and complement its weak points, it is highly likely to turn its visitors into prospective customers.
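The two charging models described above reduce to simple arithmetic, which makes them easy to compare for a planned campaign. The campaign numbers below (impressions, click-through rate, bid, CPM price) are hypothetical:

```python
def cpc_cost(clicks, bid_per_click):
    """Total cost under the pay-per-click (CPC) metered-rate model."""
    return clicks * bid_per_click

def cpm_cost(impressions, price_per_1000):
    """Total cost under the flat-rate CPM model (priced per 1,000 exposures)."""
    return impressions / 1000.0 * price_per_1000

# Hypothetical campaign: 50,000 impressions at a 2% click-through rate.
impressions, ctr = 50_000, 0.02
clicks = impressions * ctr                 # 1,000 clicks
print(cpc_cost(clicks, 0.5))               # → 500.0 (at $0.50 per click)
print(cpm_cost(impressions, 8.0))          # → 400.0 (at $8 per 1,000 exposures)
```

Which model is cheaper depends on the click-through rate: at a high enough CTR the same impressions generate more billable clicks, tipping the comparison toward CPM.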
Accordingly, an advertiser should analyze customers' behavior and approach them in a variety of ways, trying hard to find out what they want. With this in mind, he or she has to put multiple keywords to use when running ads. When first running an ad, the advertiser should give priority to which keyword to select, considering how many search-engine users will click the keyword in question and how much the advertisement will cost. As the popular keywords that search-engine users frequently use are expensive in terms of unit cost per click, advertisers without much money for advertising at the initial phase should pay attention to detailed keywords suited to their budget. Detailed keywords, also referred to as peripheral or extension keywords, are combinations of major keywords. Most keyword ads are in the form of text. The biggest strong point of text-based advertising is that it looks like search results, arousing little antipathy; but it fails to attract much attention precisely because most keyword advertising is text. Image-embedded advertising is easy to notice thanks to its images, but it is exposed on the lower part of a web page and recognized as an advertisement, which leads to a low click-through rate; its strong point, however, is that its prices are lower than those of text-based advertising. If a company owns a logo or a product that people can easily recognize, it is well advised to make good use of image-embedded advertising to attract Internet users' attention. Advertisers should analyze their logs and examine customers' responses based on site events and product composition as a vehicle for monitoring customer behavior in detail.
Besides, keyword advertising allows advertisers to analyze the advertising effects of exposed keywords through log analysis. Log analysis refers to a close analysis of the current situation of a site based on information about its visitors: the number of visitors, page views, and cookie values. A user's IP address, the pages used, the time of use, and cookie values are stored in the log files generated by each web server. The log files contain a huge amount of data, and as it is almost impossible to analyze them directly, one is supposed to analyze them using log-analysis solutions. The generic information that can be extracted with log-analysis tools includes the total number of page views, average page views per day, basic page views, page views per visit, total hits, average hits per day, hits per visit, the number of visits, average visits per day, the net number of visitors, average visitors per day, one-time visitors, visitors who have come more than twice, and average usage hours. These data are useful for analyzing the situation and current status of rival companies as well as for benchmarking. As keyword advertising exposes advertisements exclusively on search-result pages, competition among advertisers attempting to preoccupy popular keywords is very fierce. Some portal sites keep giving priority to existing advertisers, whereas others give all advertisers a chance to purchase the keywords in question once the advertising contract is over.
If an advertiser relies on keywords sensitive to seasons and timeliness on sites that give priority to established advertisers, he or she may as well purchase a vacant advertising slot lest the appropriate timing for advertising be missed. However, Naver does not give priority to existing advertisers for any keyword advertisements; in this case, one can preoccupy keywords by entering into a contract after confirming the contract period. This study is designed to examine marketing for keyword advertising and to present effective strategies for keyword advertising marketing. At present, the Korean CPC advertising market is virtually monopolized by Overture, whose strong points are its CPC charging model and the registration of advertisements at the top of the most representative portal sites in Korea. These advantages make it the most appropriate medium for small and medium enterprises. However, Overture's CPC method has weak points too: the CPC method is not the one perfect advertising model among search advertisements in the online market. So it is absolutely necessary that small and medium enterprises, including independent shopping malls, complement the weaknesses of the CPC method and make good use of strategies that maximize its strengths, so as to increase their sales and create a point of contact with customers.


항공기(航空機) 사고조사제도(事故調査制度)에 관한 연구(硏究) (A Study on the System of Aircraft Investigation)

  • 김두환
    • 항공우주정책ㆍ법학회지 / Vol. 9 / pp. 85-143 / 1997
  • The main purpose of the investigation of aircraft accidents is to prevent sudden and casual accidents caused by the wilful misconduct or fault of pilots or air traffic controllers, hijacking, trouble in the engine and machinery of the aircraft, turbulence in bad weather, collisions between birds and aircraft, near-miss flights, and so on. It is not the purpose of this activity to apportion blame or liability to the offender in an aircraft accident. Accidents to aircraft, especially those involving the general public and their property, are a matter of great concern to the aviation community. The system of international regulation exists to improve safety and minimize, as far as possible, the risk of accidents, but when they do occur there is a web of systems and procedures to investigate and respond to them. I would like to trace the general line of regulation from an international source, the Chicago Convention of 1944. Article 26 of the Convention lays down the basic principle for the investigation of aircraft accidents: where there has been an accident to an aircraft of a contracting state which occurs in the territory of another contracting state and which involves death or serious injury or indicates a serious technical defect in the aircraft or air navigation facilities, the state in which the accident occurs must institute an inquiry into the circumstances of the accident. That inquiry will be conducted, in so far as its law permits, in accordance with the procedure that may be recommended from time to time by the International Civil Aviation Organization (ICAO). These are very general provisions, but they state two essential principles: first, in certain circumstances there must be an investigation, and second, who is to be responsible for undertaking that investigation. The latter is an important point to establish, as otherwise at least two states could claim jurisdiction over the inquiry.
The Chicago Convention also provides that the state where the aircraft is registered is to be given the opportunity to appoint observers to be present at the inquiry, and the state holding the inquiry must communicate the report and findings in the matter to that other state. It is worth noting that the Chicago Convention (Article 25) also makes provision for assisting aircraft in distress. Each contracting state undertakes to provide such measures of assistance to aircraft in distress in its territory as it may find practicable, and to permit (subject to control by its own authorities) the owner of the aircraft or the authorities of the state in which the aircraft is registered to provide such measures of assistance as may be necessitated by the circumstances. Significantly, the undertaking can only be given by a contracting state, but the duty to provide assistance is not limited to aircraft registered in another contracting state; it presumably extends to any aircraft in distress in the territory of the contracting state. Finally, the Convention envisages further regulations (normally to be produced under the auspices of ICAO). In this case the Convention provides that each contracting state, when undertaking a search for missing aircraft, will collaborate in coordinated measures which may be recommended from time to time pursuant to the Convention. Since 1944 further international regulations relating to safety and the investigation of accidents have been made, both pursuant to the Chicago Convention and, in particular, through the vehicle of ICAO, which has, for example, set up an accident reporting system. By requiring the reporting of certain accidents and incidents, it is building up an information service for the benefit of member states.
Moreover, the Chicago Convention provides that each contracting state undertakes to collaborate in securing the highest practicable degree of uniformity in regulations, standards, procedures, and organization in relation to aircraft, personnel, airways, and auxiliary services, in all matters in which such uniformity will facilitate and improve air navigation. To this end, ICAO is to adopt and amend from time to time, as may be necessary, international standards and recommended practices and procedures dealing with, among other things, aircraft in distress and the investigation of accidents. Standards and Recommended Practices for aircraft accident inquiries were first adopted by the ICAO Council on 11 April 1951 pursuant to Article 37 of the Chicago Convention on International Civil Aviation and were designated as Annex 13 to the Convention. The Standards and Recommended Practices were based on recommendations of the Accident Investigation Division at its first session in February 1946, which were further developed at the second session of the Division in February 1947. Annex 13 (Aircraft Accident and Incident Investigation) has been amended eight times by the ICAO Council since 1966, in its 2nd Edition (1966), 3rd Edition (1973), 4th Edition (1976), 5th Edition (1979), 6th Edition (1981), 7th Edition (1988), and 8th Edition (1992). Annex 13 sets out in detail the international standards and recommended practices to be adopted by contracting states in dealing with a serious accident to an aircraft of a contracting state occurring in the territory of another contracting state, known as the state of occurrence. It provides, principally, that the state in which the aircraft is registered is to be given the opportunity to appoint an accredited representative to be present at the inquiry conducted by the state in which the serious aircraft accident occurs.
Article 26 of the Chicago Convention does not indicate what the accredited representative is to do, but Annex 13 amplifies his rights and duties. In particular, the accredited representative participates in the inquiry by visiting the scene of the accident, examining the wreckage, questioning witnesses, having full access to all relevant evidence, receiving copies of all pertinent documents, and making submissions in respect of the various elements of the inquiry. The main shortcomings of the present system for aircraft accident investigation are that some contracting states are not applying Annex 13 within its express terms, although they are contracting states. Further, and much more important in practice, many countries apply the letter of Annex 13 in such a way as to sterilize its spirit. This appears to be due to a number of causes, often found in combination. Firstly, the requirements of local law and local procedure are interpreted and applied so as to preclude a more efficient investigation under Annex 13 in favour of a legalistic and sterile interpretation of its terms. Sometimes this results from a distrust of the motives of persons and bodies wishing to participate; those motives may be political, commercial, or related to matters of liability and insurance. Secondly, there is said to be a conscious desire in some contracting states to conduct the investigation in such a way as to absolve from any possibility of blame the authorities or nationals, whether manufacturers, operators, or air traffic controllers, of the country in which the inquiry is held. The EEC has also had an input into accidents and investigations. In particular, a directive was issued in December 1980 encouraging the uniformity of standards within the EEC by means of joint cooperation in accident investigation.
The sharing of, and assistance with, technical facilities and information was considered an important means of achieving these goals. It has since been proposed that a European accident investigation committee should be set up by the EEC (Council Directive 80/1266 of 1 December 1980). Next, I would like to introduce a summary of the legislation and systems for aircraft accident investigation of the United States, the United Kingdom, Canada, Germany, the Netherlands, Sweden, Switzerland, New Zealand and Japan, and then describe the present system, regulations and Aviation Act governing aircraft accident investigation in Korea. Furthermore, I would like to point out the shortcomings of the present system, regulations and Aviation Act for aircraft accident investigation, and then suggest my personal opinion on a new and dramatic innovation in the system for aircraft accident investigation in Korea. I propose that it is necessary and desirable to enact new legislation, or to revise the existing Aviation Act, in order to establish a standing and independent Committee of Aircraft Accident Investigation under the Korean Government.

  • PDF

신차와 중고차간 프로모션의 상호작용에 대한 연구 (A Study on Interactions of Competitive Promotions Between the New and Used Cars)

  • 장광필
    • Asia Marketing Journal
    • /
    • Vol. 14, No. 1
    • /
    • pp.83-98
    • /
    • 2012
  • In a market where new and used cars compete together, modeling only the competition among new cars can distort the estimates of price and other promotion elasticities. However, most previous studies of the automobile market have focused only on competition in the new-car market, and have therefore been of limited help for rational pricing and promotion planning. This study specifies a model reflecting the fact that new-car pricing and promotion decisions rebound through the used-car market and affect new-car sales again. That is, the model assumes that the cross-elasticity between the new and used versions of the same model is higher than the cross-elasticity between different new cars (or between different used cars). Methodologically, a Nested Logit model is specified under the assumption that consumers choose a car in stages: in the first stage a car model is chosen, and in the second stage, given the model, a choice is made between the new and the used car. The empirical analysis covers the new and used cars of the nine compact-car models with the highest market shares among all compact cars sold across the United States from January 2009 to June 2009. The analysis shows that the proposed model is superior to the benchmark models in terms of both model fit and predictive validity. Simulations of several scenarios using the parameters estimated from the proposed model confirm that when the new (used) car offers a rebate to increase its share, the used (new) car responds with a price discount to maintain its current market share, and that this discount is larger (smaller) than in the opposite case. The simulation results also imply that applying an IIA (Independence of Irrelevant Alternatives) model to a market where new and used cars compete together underestimates the cross-elasticity between the new and used versions of the same model, so that price discounts intended to maintain the status quo are set below the appropriate level.

  • PDF
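The staged choice structure described in the abstract above (a car model first, then new versus used within the chosen model) corresponds to a standard two-level nested logit. The sketch below is illustrative only: the utilities and the nesting parameter `mu` are hypothetical, and this is not the paper's estimated specification.

```python
import math

def nested_logit_shares(models, mu=0.5):
    """Two-stage choice: first a car model (nest), then new vs. used
    within the nest.  `models` maps model name -> {"new": V, "used": V}
    with deterministic utilities V; `mu` (0 < mu <= 1) is the nesting
    parameter -- mu near 0 makes new and used versions of the same
    model close substitutes."""
    within, iv = {}, {}
    for m, utils in models.items():
        # Within-nest (new vs. used) probabilities
        exps = {alt: math.exp(v / mu) for alt, v in utils.items()}
        denom = sum(exps.values())
        within[m] = {alt: e / denom for alt, e in exps.items()}
        # Inclusive value carries the nest's attractiveness upward
        iv[m] = mu * math.log(denom)
    # First-stage (model) probabilities from the inclusive values
    top = {m: math.exp(v) for m, v in iv.items()}
    top_denom = sum(top.values())
    return {
        (m, alt): (top[m] / top_denom) * p
        for m in models for alt, p in within[m].items()
    }

shares = nested_logit_shares(
    {"A": {"new": 1.0, "used": 0.2}, "B": {"new": 0.8, "used": 0.5}}
)
assert abs(sum(shares.values()) - 1.0) < 1e-9
```

With `mu` equal to 1 this collapses to a plain multinomial logit (IIA); `mu` below 1 makes the new and used versions of the same model closer substitutes, which is exactly the cross-elasticity that the abstract argues an IIA model understates.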

한정된 O-D조사자료를 이용한 주 전체의 트럭교통예측방법 개발 (DEVELOPMENT OF STATEWIDE TRUCK TRAFFIC FORECASTING METHOD BY USING LIMITED O-D SURVEY DATA)

  • 박만배
    • Korean Society of Transportation: Conference Proceedings
    • /
    • Korean Society of Transportation, 27th Annual Conference (1995)
    • /
    • pp.101-113
    • /
    • 1995
  • The objective of this research is to test the feasibility of developing a statewide truck traffic forecasting methodology for Wisconsin by using Origin-Destination surveys, traffic counts, classification counts, and other data that are routinely collected by the Wisconsin Department of Transportation (WisDOT). Development of a feasible model will permit estimation of future truck traffic for every major link in the network. This will provide the basis for improved estimation of future pavement deterioration. Pavement damage rises exponentially as axle weight increases, and trucks are responsible for most of the traffic-induced damage to pavement. Consequently, forecasts of truck traffic are critical to pavement management systems. The Pavement Management Decision Supporting System (PMDSS) prepared by WisDOT in May 1990 combines pavement inventory and performance data with a knowledge base consisting of rules for evaluation, problem identification and rehabilitation recommendation. Without a reasonable truck traffic forecasting methodology, PMDSS is not able to project pavement performance trends in order to make assessments and recommendations in future years. However, none of WisDOT's existing forecasting methodologies has been designed specifically for predicting truck movements on a statewide highway network. For this research, the Origin-Destination survey data available from WisDOT, including two stateline areas, one county, and five cities, are analyzed and the zone-to-zone truck trip tables are developed. The resulting Origin-Destination Trip Length Frequency (OD TLF) distributions by trip type are applied to the Gravity Model (GM) for comparison with comparable TLFs from the GM. The gravity model is calibrated to obtain friction factor curves for the three trip types, Internal-Internal (I-I), Internal-External (I-E), and External-External (E-E). Both "macro-scale" calibration and "micro-scale" calibration are performed. 
The comparison of the statewide GM TLF with the OD TLF for the macro-scale calibration does not provide suitable results because the available OD survey data do not represent an unbiased sample of statewide truck trips. For the "micro-scale" calibration, "partial" GM trip tables that correspond to the OD survey trip tables are extracted from the full statewide GM trip table. These "partial" GM trip tables are then merged and a partial GM TLF is created. The GM friction factor curves are adjusted until the partial GM TLF matches the OD TLF. Three friction factor curves, one for each trip type, resulting from the micro-scale calibration produce a reasonable GM truck trip model. A key methodological issue for GM calibration involves the use of multiple friction factor curves versus a single friction factor curve for each trip type in order to estimate truck trips with reasonable accuracy. A single friction factor curve for each of the three trip types was found to reproduce the OD TLFs from the calibration data base. Given the very limited trip generation data available for this research, additional refinement of the gravity model using multiple friction factor curves for each trip type was not warranted. In traditional urban transportation planning studies, the zonal trip productions and attractions and region-wide OD TLFs are available. However, for this research, the information available for the development of the GM model is limited to Ground Counts (GC) and a limited set of OD TLFs. The GM is calibrated using the limited OD data, but the OD data are not adequate to obtain good estimates of truck trip productions and attractions. Consequently, zonal productions and attractions are estimated using zonal population as a first approximation. Then, Selected Link based (SELINK) analyses are used to adjust the productions and attractions and possibly recalibrate the GM. 
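A production-constrained gravity model of the kind calibrated here distributes each zone's productions across attraction zones in proportion to attraction size and the friction factor for the zone pair. The following is a minimal sketch with hypothetical zones, productions, attractions and friction factors, not the WisDOT network or the calibrated curves.

```python
def gravity_trips(productions, attractions, friction):
    """Production-constrained gravity model: T_ij is proportional to
    P_i * A_j * F_ij, normalised so each origin's trips sum to P_i.
    `friction[(i, j)]` plays the role of the calibrated friction-factor
    curve evaluated at the i-j impedance (e.g. travel time)."""
    trips = {}
    for i, p in productions.items():
        denom = sum(attractions[j] * friction[(i, j)] for j in attractions)
        for j, a in attractions.items():
            trips[(i, j)] = p * a * friction[(i, j)] / denom
    return trips

# Hypothetical example: equal attractions, but zone "3" is "farther"
# (smaller friction factor), so it draws fewer of zone "1"'s trips.
t = gravity_trips(
    {"1": 100.0},
    {"2": 50.0, "3": 50.0},
    {("1", "2"): 1.0, ("1", "3"): 0.5},
)
```

In the micro-scale calibration described above, the friction factors would be adjusted iteratively until the trip-length frequency distribution of the resulting trip table matches the observed OD TLF.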
The SELINK adjustment process involves identifying the origins and destinations of all truck trips that are assigned to a specified "selected link" as the result of a standard traffic assignment. A link adjustment factor is computed as the ratio of the actual volume for the link (ground count) to the total assigned volume. This link adjustment factor is then applied to all of the origin and destination zones of the trips using that "selected link". Selected link based analyses are conducted by using both 16 selected links and 32 selected links. The result of SELINK analysis by using 32 selected links provides the least %RMSE in the screenline volume analysis. In addition, the stability of the GM truck estimating model is preserved by using 32 selected links with three SELINK adjustments, that is, the GM remains calibrated despite substantial changes in the input productions and attractions. The coverage of zones provided by 32 selected links is satisfactory. Increasing the number of repetitions beyond four is not reasonable because the stability of the GM model in reproducing the OD TLF reaches its limits. The total volume of truck traffic captured by 32 selected links is 107% of total trip productions. But more importantly, SELINK adjustment factors for all of the zones can be computed. Evaluation of the travel demand model resulting from the SELINK adjustments is conducted by using screenline volume analysis, functional class and route specific volume analysis, area specific volume analysis, production and attraction analysis, and Vehicle Miles of Travel (VMT) analysis. Screenline volume analysis using four screenlines with 28 check points is used for evaluation of the adequacy of the overall model. The total trucks crossing the screenlines are compared to the ground count totals. LV/GC ratios of 0.958 by using 32 selected links and 1.001 by using 16 selected links are obtained. 
The %RMSE for the four screenlines is inversely proportional to the average ground count totals by screenline. The magnitude of the %RMSE for the four screenlines resulting from the fourth and last GM run by using 32 and 16 selected links is 22% and 31% respectively. These results are similar to the overall %RMSE achieved for the 32 and 16 selected links themselves of 19% and 33% respectively. This implies that the SELINK analysis results are reasonable for all sections of the state. Functional class and route specific volume analysis is possible by using the available 154 classification count check points. The truck traffic crossing the Interstate highways (ISH) with 37 check points, the US highways (USH) with 50 check points, and the State highways (STH) with 67 check points is compared to the actual ground count totals. The magnitude of the overall link volume to ground count ratio by route does not provide any specific pattern of over- or under-estimation. However, the %RMSE for the ISH shows the least value while that for the STH shows the largest value. This pattern is consistent with the screenline analysis and the overall relationship between %RMSE and ground count volume groups. Area specific volume analysis provides another broad statewide measure of the performance of the overall model. The truck traffic in the North area with 26 check points, the West area with 36 check points, the East area with 29 check points, and the South area with 64 check points is compared to the actual ground count totals. The four areas show similar results. No specific patterns in the LV/GC ratio by area are found. In addition, the %RMSE is computed for each of the four areas. The %RMSEs for the North, West, East, and South areas are 92%, 49%, 27%, and 35% respectively, whereas the average ground counts are 481, 1383, 1532, and 3154 respectively. As for the screenline and volume range analyses, the %RMSE is inversely related to average link volume. 
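The two quantities driving this evaluation can be written down directly. The sketch below assumes the %RMSE is expressed relative to the average ground count, consistent with the observation that %RMSE falls as average link volume rises; the numbers in the comments are made up, not WisDOT data.

```python
import math

def link_adjustment_factor(ground_count, assigned_volume):
    """SELINK factor: ratio of the observed link volume (ground count)
    to the total volume assigned to the selected link.  It is applied
    to the productions and attractions of every zone whose trips use
    that selected link."""
    return ground_count / assigned_volume

def pct_rmse(assigned, counts):
    """%RMSE of assigned link volumes against ground counts, expressed
    relative to the average ground count (assumed normalisation)."""
    n = len(counts)
    rmse = math.sqrt(sum((a - c) ** 2 for a, c in zip(assigned, counts)) / n)
    return 100.0 * rmse / (sum(counts) / n)

# Hypothetical link: 150 counted vs. 100 assigned trucks -> factor 1.5,
# scaling up the productions/attractions of the zones feeding the link.
factor = link_adjustment_factor(150.0, 100.0)
```

Because the denominator is the average ground count, screenlines with heavier volumes mechanically yield smaller %RMSE for the same absolute error, which matches the inverse relationship reported above.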
The SELINK adjustments of productions and attractions resulted in a very substantial reduction in the total in-state zonal productions and attractions. The initial in-state zonal trip generation model can now be revised with a new trip production rate (total adjusted productions/total population) and a new trip attraction rate. Revised zonal production and attraction adjustment factors can then be developed that only reflect the impact of the SELINK adjustments that cause increases or decreases from the revised zonal estimates of productions and attractions. Analysis of the revised production adjustment factors is conducted by plotting the factors on the state map. The east area of the state, including the counties of Brown, Outagamie, Shawano, Winnebago, Fond du Lac and Marathon, shows comparatively large values of the revised adjustment factors. Overall, both small and large values of the revised adjustment factors are scattered around Wisconsin. This suggests that more independent variables beyond just population are needed for the development of the heavy truck trip generation model. More independent variables, including zonal employment data (office employees and manufacturing employees) by industry type, zonal private trucks owned, and zonal income data, which are not available currently, should be considered. A plot of the frequency distribution of the in-state zones as a function of the revised production and attraction adjustment factors shows the overall adjustment resulting from the SELINK analysis process. Overall, the revised SELINK adjustments show that the productions for many zones are reduced by a factor of 0.5 to 0.8, while the productions for a relatively few zones are increased by factors from 1.1 to 4, with most of the factors in the 3.0 range. No obvious explanation for the frequency distribution could be found. The revised SELINK adjustments overall appear to be reasonable. 
The heavy truck VMT analysis is conducted by comparing the 1990 heavy truck VMT forecasted by the GM truck forecasting model, 2.975 billion, with the WisDOT computed data. This gives an estimate that is 18.3% less than the WisDOT computation of 3.642 billion VMT. The WisDOT estimates are based on sampling the link volumes for USH, STH, and CTH. This implies potential error in sampling the average link volume. The WisDOT estimate of heavy truck VMT cannot be tabulated by the three trip types, I-I, I-E (E-I), and E-E. In contrast, the GM forecasting model shows that the proportion of E-E VMT out of total VMT is 21.24%. In addition, tabulation of heavy truck VMT by route functional class shows that the proportion of truck traffic traversing the freeways and expressways is 76.5%. Only 14.1% of total freeway truck traffic is I-I trips, while 80% of total collector truck traffic is I-I trips. This implies that freeways are traversed mainly by I-E and E-E truck traffic while collectors are used mainly by I-I truck traffic. Other tabulations, such as average heavy truck speed by trip type, average travel distance by trip type, and the VMT distribution by trip type, route functional class and travel speed, are useful information for highway planners to understand the characteristics of statewide heavy truck trip patterns. Heavy truck volumes for the target year 2010 are forecasted by using the GM truck forecasting model. Four scenarios are used. For better forecasting, ground-count-based segment adjustment factors are developed and applied. ISH 90 & 94 and USH 41 are used as example routes. The forecasting results by using the ground-count-based segment adjustment factors are satisfactory for long range planning purposes, but additional ground counts would be useful for USH 41. Sensitivity analysis provides estimates of the impacts of the alternative growth rates, including information about changes in the trip types using key routes. 
The network-based GM can easily model scenarios with different rates of growth in rural versus urban areas, small versus large cities, and in-state zones versus external stations.

  • PDF