• Title/Abstract/Keyword: Size of product

1,932 search results

Effect of Zr Content on the Magnetic Properties of Sm(Co0.688-xFe0.242Cu0.07Zrx)7.404 Sintered Magnets

  • 정우상;김윤배;정원용
    • 한국자기학회지
    • /
    • Vol. 12, No. 5
    • /
    • pp.189-194
    • /
    • 2002
  • To investigate the effect of Zr on the magnetic properties of Sm-Co sintered magnets, ingots with varying Zr contents were prepared, and the microstructure and magnetic properties of the sintered magnets were examined as a function of solution-treatment and aging conditions. In the as-cast Sm(Co0.688-xFe0.242Cu0.07Zrx)7.404 (0.013 ≤ x ≤ 0.026) alloys, the fraction and size of the eutectic regions decreased with increasing x, while the dendritic structure was finest in the x = 0.022 alloy. The sintered Sm(Co0.688-xFe0.242Cu0.07Zrx)7.404 magnets formed a cell structure consisting of the SmCo5 and Sm2Co17 phases; the SmCo5 cell-boundary phase was about 20 nm thick and formed at angles of 120°. The cell size decreased with increasing Zr content, but in the x = 0.026 alloy the cell boundaries were not clearly defined and the cell morphology was more irregular than in the x = 0.022 alloy. Owing to these microstructural differences, the x = 0.022 alloy showed the best magnetic properties. The optimal solution-treatment temperature for the Sm(Co0.688-xFe0.242Cu0.07Zrx)7.404 alloys was 1170°C, and the optimal first-step temperature in the two-step aging treatment for improving coercivity was 850°C.

Efficiency Analysis of a Spiral-Structured Twist Screen for Improving Food-Powder Sieving

  • 박인순;나은수;장동순;백영수
    • 산업식품공학
    • /
    • Vol. 14, No. 2
    • /
    • pp.85-91
    • /
    • 2010
  • Experiments were conducted to modify the structure of a twist screen, which is widely used in food processing, and to analyze its efficiency. A dam was installed on the screen frame to keep particles fed onto the screen from escaping unscreened along the outer frame, and a spiral structure was installed on the screen surface to prevent the loss of screening efficiency that occurs when centrally fed material is driven rapidly outward by the vibration and piles up into a particle layer. Twist screens of 1,200 mm and 1,500 mm diameter were used with a standard mesh-24 screen of 0.12 mm wire diameter; the vibration motor was operated at 60 Hz, and the magnetic vibratory feeder was tested at frequency gauge settings of 8 and 10, with the following results. On the 1,200 mm screen, the standard configuration processed 24 kg/hr with a screened fraction of 4.72%, while the configuration with the frame dam and spiral structure processed 22.8 kg/hr with a screened fraction of 8.05%. Although throughput decreased slightly, the screened fraction increased 1.7-fold, a substantial improvement in screening efficiency. On the 1,500 mm screen, the standard configuration processed 43.32 kg/hr, a 1.8-fold increase over the 1,200 mm screen, but the screened fraction was only 2.37%, low relative to the throughput. Particle-size analysis of the unscreened stream (stream 1) showed fine particles that should have passed the screen remaining in the unscreened product: the larger diameter raised throughput, but the higher feed rate shortened the particles' contact time with the mesh and intensified layering, so efficiency fell and fine particles were not completely separated. With the frame dam installed on the 1,500 mm screen, throughput was 43.14 kg/hr, similar to the standard configuration, while the screened fraction rose markedly to 3.25%. On this screen, where layering was severe, the dam prevented particles from escaping along the frame with shortened residence time on the mesh, and particles that had formed a layer along the frame without contacting the screen were redirected inward by the dam, so the efficiency gain was clear at nearly the same throughput. With both the frame dam and the spiral structure installed, throughput decreased slightly to 39.04 kg/hr, but the screened fraction was 6.77%: three times that of the standard 1,500 mm configuration and twice that of the dam-only configuration. Particle-size analysis of the unscreened stream found no remaining fine particles. Even with the frame dam alone, particles were still swept quickly to the outer edge, reducing mesh contact time and separation through inter-particle interference within the layer; the spiral structure prevented this by increasing residence time and dispersing the particle layer, thereby increasing screening efficiency. Installing the spiral structure tended to reduce throughput slightly, but the screened fraction differed by up to threefold, and the effect grew with feed rate and throughput.
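The efficiency comparisons above follow directly from the reported figures; a minimal sketch, using only the numbers quoted in the abstract, reproduces the stated ratios:

```python
# Ratios quoted in the abstract, recomputed from the reported figures.
# All values come from the text; nothing here is newly measured.

def ratio(after, before):
    """Improvement factor between two reported values."""
    return round(after / before, 2)

# 1,200 mm screen: screened fraction, plain vs. dam + spiral structure.
print(ratio(8.05, 4.72))   # → 1.71 (quoted as about 1.7x)

# Throughput, 1,500 mm plain screen vs. 1,200 mm plain screen
# (quoted as about 1.8x).
print(ratio(43.32, 24.0))
```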

Energy and nutrition evaluation per single serving package for each type of home meal replacement rice

  • 최인영;연지영;김미현
    • Journal of Nutrition and Health
    • /
    • Vol. 55, No. 4
    • /
    • pp.476-491
    • /
    • 2022
  • This study examined the single-serving package sizes of commercially available home meal replacement (HMR) rice products and evaluated their energy and nutrient contents per single-serving package based on nutrition labeling, to highlight the importance of using nutrition labeling information and to provide baseline data for setting single-serving package sizes for processed rice products. The market survey covered products sold online and in convenience stores, supermarkets, and hypermarkets, including the top-selling HMR rice brands according to food industry statistics, from February to July 2021. A total of 406 products were surveyed and classified into six types: instant rice (45), cup rice (64), frozen rice (188), topped rice (32), gimbap (38), and triangular gimbap (39). Of the 406 products, 106 (26.1%) carried a servings-per-package indication. The single-serving package weight of topped rice (297.1 g) was significantly higher than that of cup rice (264.0 g), frozen rice (239.5 g), gimbap (230.2 g), instant rice (193.4 g), and triangular gimbap (121.6 g) (p < 0.001). Energy per package was likewise highest for topped rice (496.0 kcal), followed by frozen rice (407.1 kcal), gimbap (384.2 kcal), cup rice (370.2 kcal), instant rice (285.7 kcal), and triangular gimbap (218.1 kcal) (p < 0.001). Sodium per package was highest in gimbap (1,021.8 mg), followed by topped rice (968.2 mg), cup rice (910.2 mg), frozen rice (884.7 mg), triangular gimbap (529.9 mg), and instant rice (37.4 mg) (p < 0.001). Price per package was highest for topped rice (4,333.8 won), followed by cup rice (3,007.8 won), frozen rice (2,728.6 won), gimbap (2,500.0 won), instant rice (2,066.6 won), and triangular gimbap (1,212.8 won) (p < 0.001). Relative to daily nutrient reference values, one package supplied on average 10-25% of energy, 10-30% of protein, 2-51% of sodium, and around 15% (11-22%) of carbohydrate across types. When nutrient contents were evaluated per 210 g, the standard single serving of cooked rice, the proportion of the daily reference value for energy was highest for triangular gimbap (18.8%), followed by frozen rice (17.9%), gimbap (17.5%), topped rice (17.4%), instant rice (15.3%), and cup rice (14.9%) (p < 0.001); for sodium, gimbap (46.5%) and triangular gimbap (46.1%) were significantly higher (p < 0.001). Sugars ranged from 0.6-5.3%, fat from 3.1-19.4%, and protein from 11.3-21.4% on average. Because convenience sampling was used, this study could not cover every product on the market, but it sought to include as many products as possible at the time of the survey by covering top-selling products and the online market without regional restriction. The results show that the energy, nutrient contents, and prices of HMR rice products differ by type and vary widely even within a type, so consumers should check the nutrition labeling in light of their purpose of consumption and the foods they combine, and make informed choices. Moreover, even when evaluated at an identical serving weight, the types differed in energy, energy-yielding nutrient composition, and sodium content, suggesting that single-serving package sizes should be diversified to reflect the main use (meal or snack) and consumption form (alone or combined) of each type.
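The per-210 g normalization described above is simple to reproduce. The sketch below assumes the Korean labeling reference value of 2,000 kcal for daily energy (an assumption; the abstract does not state the reference values) and uses the triangular-gimbap figures quoted there:

```python
# Scale a labeled nutrient amount to the 210 g standard serving of
# cooked rice and express it as a percentage of a daily reference value.
# The 2,000 kcal energy reference is an assumed labeling standard.

def percent_daily_value(amount_per_package, package_weight_g,
                        reference_value, serving_g=210):
    """Amount per 210 g serving as a percentage of the daily reference."""
    per_serving = amount_per_package * serving_g / package_weight_g
    return round(per_serving / reference_value * 100, 1)

# Triangular gimbap: 218.1 kcal in a 121.6 g package (from the abstract).
print(percent_daily_value(218.1, 121.6, 2000))  # → 18.8
```

This matches the 18.8% figure reported for triangular gimbap, which suggests the study used the same 2,000 kcal reference.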

Adaptive RFID anti-collision scheme using collision information and m-bit identification

  • 이제율;신종민;양동민
    • 인터넷정보학회논문지
    • /
    • Vol. 14, No. 5
    • /
    • pp.1-10
    • /
    • 2013
  • An RFID (Radio Frequency Identification) system is a contactless short-range wireless identification technology consisting of one RFID reader and multiple RFID tags. RFID tags divide into active tags, which can perform computations on their own, and passive tags, which are less capable but inexpensive enough to suit logistics and distribution. A data-processing host connected to the reader processes the information the reader receives. Using radio frequencies, an RFID system can identify many tags in a short time, and it is applied in diverse fields such as distribution, logistics, transportation, inventory management, access control, and finance. For RFID systems to spread further, however, problems of cost, size, power consumption, and security remain to be solved. Among these, this paper addresses the collision problem that arises when identifying multiple passive tags. Anti-collision schemes for identifying multiple tags fall into probabilistic, deterministic, and hybrid approaches. We first review the ALOHA-based protocols (probabilistic) and the tree-based protocols (deterministic). In ALOHA-based protocols, time is divided into slots and each tag transmits its ID in a randomly chosen slot; because slot selection is probabilistic, identification of every tag is not guaranteed. Tree-based protocols, by contrast, guarantee identification of all tags within the reader's range: the reader queries and the tags respond. When two or more tags respond to a query, a collision occurs and the reader must generate and transmit a new query; frequent collisions therefore mean frequent new queries and reduced speed, so identifying many tags quickly requires an efficient algorithm that reduces collisions. Every RFID tag carries a 96-bit EPC (Electronic Product Code) tag ID, so tags from the same company or manufacturer have similar IDs sharing a common prefix. In this case, identifying many tags with the query tree protocol causes frequent collisions; the number of query-response rounds grows and idle nodes appear, strongly degrading identification efficiency and speed. The collision tree protocol and the M-ary query tree protocol were proposed to solve this problem, but the collision tree protocol, like the query tree protocol, identifies only one bit at a time, and when many similar tag IDs exist, the M-ary query tree protocol generates unnecessary query-response rounds. To resolve these problems, this paper proposes an adaptive M-ary query tree protocol that improves performance by combining m-bit identification using the mapping function of the M-ary query tree protocol, tag-ID collision information obtained through Manchester coding, and a prediction technique that reduces the depth of the M-ary query tree by one. We compared the proposed scheme with existing tree-based protocols under identical conditions; the results show that the proposed scheme outperforms the others in identification time and identification efficiency.
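The query tree mechanism the paper builds on can be sketched in a few lines. This is the basic binary query tree protocol only, not the proposed adaptive M-ary variant, and the 4-bit tag IDs are invented for illustration:

```python
from collections import deque

def query_tree_identify(tag_ids):
    """Identify every tag ID with the basic binary query tree protocol.
    The reader broadcasts a prefix; every tag whose ID starts with it
    replies. Zero replies: idle node. One reply: the tag is identified.
    Two or more: a collision, so the reader refines the prefix by one bit.
    Returns (identified IDs in discovery order, number of queries sent)."""
    identified, queries = [], 0
    pending = deque([""])                 # prefixes still to probe
    while pending:
        prefix = pending.popleft()
        queries += 1
        replies = [t for t in tag_ids if t.startswith(prefix)]
        if len(replies) == 1:
            identified.append(replies[0])
        elif len(replies) > 1:            # collision: split the prefix
            pending.append(prefix + "0")
            pending.append(prefix + "1")
    return identified, queries

# Tags sharing prefixes (as with same-manufacturer EPCs) cause many
# collisions: 15 queries to identify just 4 tags.
ids, n = query_tree_identify(["0010", "0011", "1100", "1101"])
print(sorted(ids), n)  # → ['0010', '0011', '1100', '1101'] 15
```

The query count grows quickly when IDs share long prefixes, which is exactly the inefficiency the paper's m-bit identification and collision-information techniques aim to reduce.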

A Study on the Critical Success Factors of Social Commerce through the Analysis of the Perception Gap between the Service Providers and the Users: Focused on Ticket Monster in Korea

  • 김일중;이대철;임규건
    • Asia Pacific Journal of Information Systems
    • /
    • Vol. 24, No. 2
    • /
    • pp.211-232
    • /
    • 2014
  • Recently, there is growing interest in social commerce using SNS (Social Networking Services), and the size of its market is also expanding with the popularization of smartphones, tablet PCs, and other smart devices. Various studies have accordingly been attempted, but most previous studies have been conducted from the perspective of the users. The purpose of this study is to derive user-centered CSFs (Critical Success Factors) of social commerce from the previous studies and to analyze the CSF perception gap between social commerce service providers and users. The CSF perception gap between the two groups shows that there is a difference between the ideal image the service providers hope for and the actual image the users have of social commerce companies. This study provides effective improvement directions for social commerce companies by presenting current business problems and solution plans. As the target service provider, this study selected Ticket Monster, Korea's representative social commerce business, which is dominant in sales and staff size and gained strong funding power through an M&A stock exchange in August 2011 with the US social commerce business LivingSocial, which counts Amazon.com as a shareholder. We gathered questionnaires from both service providers and users from October 22 to October 31, 2012 for an empirical analysis: 160 service providers at Ticket Monster and 160 social commerce users with experience of the Ticket Monster service. Of the 320 surveys, 20 questionnaires that were unfit or undependable were discarded; the remaining 300 (150 service providers, 150 users) were used for this empirical study. The statistics were analyzed using SPSS 12.0.
The implications of the empirical analysis are as follows. First, the two groups rank the importance of social commerce CSFs differently. While service providers regard price economy as the most important CSF influencing purchase intention, the users regard trust as the most important. This means that service providers have to utilize the unique strength of social commerce, earning customers' trust, rather than just focusing on selling products at a discounted price. Service providers need to enhance effective communication through SNS and play a vital role as trusted advisers who provide curation services and explain the value of products through information filtering. They also need to pay attention to preventing consumer damage from deceptive and false advertising, and should create a detailed compensation system for consumer damage caused by such problems; this can build strong ties with customers. Second, both service providers and users consider the social commerce CSFs influencing purchase intention to be price economy, utility, trust, and word-of-mouth effect. Users thus expect price and economic benefits when using social commerce, and service providers should be able to offer individualized discount benefits through diverse methods using social networking services. From the aspect of usefulness, service providers should make users aware of the time savings, efficiency, and convenience of social commerce. It is therefore necessary to increase the usefulness of social commerce through new management strategies, such as strengthening the website's search engine, facilitating payment through the shopping basket, and package distribution.
Trust, as mentioned before, is the most important variable in consumers' minds, so it must be managed for sustainable business. If trust in social commerce falls because of consumer damage from false and exaggerated advertising, it could negatively affect the image of the social commerce industry as a whole. Instead of advertising with famous celebrities and spending lavishly on marketing, the social commerce industry should use the word-of-mouth effect among users through social networking services, the major marketing method of early social commerce. The word-of-mouth effect arising from consumers spontaneously acting as self-marketers not only reduces a service provider's advertising costs but also lays the basis for offering discounted prices to consumers; in this context, the word-of-mouth effect should be managed as a CSF of social commerce. Third, trade safety was not derived as a CSF. Recently, as e-commerce such as social commerce and Internet shopping has grown in a variety of forms, the importance of trade safety on the Internet has also increased, but in this study trade safety was not evaluated as a CSF of social commerce by either group. We judge that this is because both service providers and users perceive that reliable PGs (payment gateways) handle the e-payments of Internet transactions. Both groups feel that social commerce businesses can differentiate themselves through website identity and through the products and services they sell, but see little difference between businesses in e-payment systems. In other words, trade safety is perceived as a natural, basic, universal service.
Fourth, service providers should intensify communication with users through social networking services, the major marketing method of social commerce, and make use of the word-of-mouth effect among users, which, for the reasons given above, should itself be managed as a CSF of social commerce. In this paper, the characteristics of social commerce are limited to five independent variables; if an additional study proceeds with more varied independent variables, more in-depth results can be derived. In addition, this research targets social commerce service providers and users; considering that social commerce is a two-sided market, deriving CSFs through an analysis of the perception gap between social commerce service providers and their advertising clients would be worth addressing in a follow-up study.

An Empirical Study on the Determinants of Supply Chain Management Systems Success from Vendor's Perspective

  • 강성배;문태수;정윤
    • Asia Pacific Journal of Information Systems
    • /
    • Vol. 20, No. 3
    • /
    • pp.139-166
    • /
    • 2010
  • Supply chain management (SCM) systems have emerged as strong managerial tools for manufacturing firms in enhancing competitive strength. Despite large investments in SCM systems, many companies are not fully realizing the promised benefits. A review of the literature on the adoption, implementation, and success factors of IOS (inter-organizational systems) and EDI (electronic data interchange) systems shows that this issue has been examined from multiple theoretical perspectives, and many researchers have attempted to identify the factors that influence the success of system implementation. However, the existing studies have two drawbacks in revealing the determinants of implementation success. First, previous research raises questions as to the appropriateness of the research subjects selected. Most SCM systems operate in the form of private industrial networks, where the participants consist of two distinct groups: focal companies and vendors. The focal companies are the primary actors in developing and operating the systems, while vendors are passive participants connected to the system in order to supply raw materials and parts to the focal companies. Under these circumstances, there are three ways to select research subjects: focal companies only, vendors only, or the two parties grouped together. It is hard to find research that uses focal companies exclusively as subjects, probably because the sample size would be insufficient for statistical analysis; most research has been conducted using data collected from both groups. We argue that the SCM success factors cannot be correctly identified in this case. The focal companies and the vendors are in different positions in many respects regarding system implementation: firm size, managerial resources, bargaining power, organizational maturity, and so on. There is no obvious reason to believe that the success factors of the two groups are identical.
Grouping the two together also raises questions about measuring system success. The benefits from utilizing the systems may not be distributed equally between the two groups; one group's benefits might be realized at the expense of the other, considering that vendors participating in SCM systems are under continuous pressure from the focal companies with respect to prices, quality, and delivery time. Therefore, by combining the system outcomes of both groups we cannot correctly measure the benefits obtained by each. Second, the measures of system success adopted in previous research fall short in measuring SCM success. User satisfaction, system utilization, and user attitudes toward the systems are the most commonly used success measures in existing studies. These measures were developed as proxy variables in studies of decision support systems (DSS), where the contribution of the systems to organizational performance is very difficult to measure. Unlike DSS, SCM systems have more specific goals, such as cost savings, inventory reduction, quality improvement, shorter lead times, and higher customer service; we maintain that more specific measures can be developed instead of proxy variables in order to measure the system benefits correctly. The purpose of this study is to find the determinants of SCM systems success from the perspective of vendor companies. In developing the research model, we focused on selecting success factors appropriate for vendors through a review of past research and on developing more accurate success measures. The variables can be classified into technological, organizational, and environmental factors on the basis of the TOE (Technology-Organization-Environment) framework.
The model consists of three independent variables (competition intensity, top management support, and information system maturity), one mediating variable (collaboration), one moderating variable (government support), and a dependent variable (system success). The system success measures were developed to reflect the operational benefits of SCM systems: improvement in planning and analysis capabilities, faster throughput, cost reduction, task integration, and improved product and customer service. The model was validated using survey data collected from 122 vendors participating in SCM systems in Korea. Mediation was tested with hierarchical regression analysis on collaboration, and the moderating effect was tested with moderated multiple regression examining the effect of government support. The results show that information system maturity and top management support are the most important determinants of SCM system success. Supply chain technologies that standardize data formats and enhance information sharing may be adopted by supply chain leader organizations, because of the influence of the focal company in private industrial networks, in order to streamline transactions and improve inter-organizational communication. Especially, the need to develop and sustain information system maturity provides the focus and purpose to overcome information system obstacles and resistance to innovation diffusion within the supply chain network. The support of top management helps focus efforts toward the realization of inter-organizational benefits and lends credibility to the functional managers responsible for implementation; the active involvement, vision, and direction of high-level executives provide the impetus needed to sustain SCM implementation. The quality of collaboration relationships is also positively related to the outcome variable.
The collaboration variable is found to mediate between the influencing factors and implementation success. Higher levels of inter-organizational collaboration behaviors, such as shared planning and flexibility in coordinating activities, were strongly linked to the vendors' trust in the supply chain network. Government support moderates the effect of IS maturity, competition intensity, and top management support on collaboration and on SCM implementation success. In general, vendor companies face substantially greater risks in SCM implementation than larger companies do because of severe constraints on financial and human resources and limited education on SCM systems. Besides resources, vendors generally lack computer experience and sufficient internal SCM expertise. For these reasons, government support may establish requirements for firms doing business with the government or provide incentives to adopt and implement SCM systems or practices. Government support yields significant improvements in SCM implementation success when IS maturity, competition intensity, top management support, and collaboration are low. The environmental characteristic of competition intensity has no direct effect on the vendor's perception of SCM system success, but vendors facing above-average competition intensity will have a greater need for changing technology. This suggests that companies trying to implement SCM systems should set up compatible supply chain networks and high-quality collaboration relationships for implementation and performance.
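The mediation test mentioned above (hierarchical regression on collaboration) can be illustrated with a toy Baron-and-Kenny-style check. Everything below is synthetic: the data, the variable scaling, and the coefficient values are invented to show the mechanics, not taken from the paper:

```python
# Toy mediation check: does "collaboration" carry the effect of
# "top management support" on "system success"? Data are synthetic
# and built so that mediation is complete by construction.

def ols(X, y):
    """Ordinary least squares via the normal equations (X'X)b = X'y,
    solved with Gaussian elimination and partial pivoting."""
    n, k = len(X), len(X[0])
    A = [[sum(X[i][p] * X[i][q] for i in range(n)) for q in range(k)]
         for p in range(k)]
    v = [sum(X[i][p] * y[i] for i in range(n)) for p in range(k)]
    for col in range(k):                      # forward elimination
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        v[col], v[piv] = v[piv], v[col]
        for r in range(col + 1, k):
            f = A[r][col] / A[col][col]
            for c in range(col, k):
                A[r][c] -= f * A[col][c]
            v[r] -= f * v[col]
    coef = [0.0] * k
    for r in range(k - 1, -1, -1):            # back substitution
        coef[r] = (v[r] - sum(A[r][c] * coef[c]
                              for c in range(r + 1, k))) / A[r][r]
    return coef

support = [1.0, 2.0, 3.0, 4.0, 5.0]
collab  = [2.1, 3.9, 6.2, 8.0, 9.8]    # roughly 2 * support, with noise
success = [3 * c for c in collab]      # success depends only on collab

step1 = ols([[1.0, s] for s in support], success)   # total effect of support
step3 = ols([[1.0, s, c] for s, c in zip(support, collab)], success)
# Once the mediator is controlled for, the direct effect of support
# drops to ~0 while the mediator keeps its full coefficient of 3:
# evidence of full mediation in this toy data.
print(round(step1[1], 2), round(step3[2], 2))
```

The paper's actual analysis is a hierarchical (stepwise) regression with several predictors plus a moderated regression for government support; this sketch shows only the core mediation logic.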

An Empirical Study on the Influencing Factors for Big Data Intended Adoption: Focusing on the Strategic Value Recognition and TOE (Technology-Organization-Environment) Framework

  • 가회광;김진수
    • Asia Pacific Journal of Information Systems
    • /
    • Vol. 24, No. 4
    • /
    • pp.443-472
    • /
    • 2014
  • To survive in the global competitive environment, enterprises must be able to solve various problems and find optimal solutions effectively. Big data is perceived as a tool for solving enterprise problems and improving competitiveness through its varied problem-solving and advanced predictive capabilities. Owing to this remarkable potential, big data systems have been implemented by many enterprises around the world. Big data is now called the 'crude oil' of the 21st century and is expected to confer competitive superiority. Big data is in the limelight because, where conventional IT has reached the limits of what it makes possible, big data can be used to create new value, such as business optimization and new business creation, through analysis. Because big data has often been introduced hastily, however, without considering the strategic value to be derived and achieved through it, firms face difficulties in deriving strategic value and utilizing their data. According to a survey of 1,800 IT professionals from 18 countries, only 28% of corporations were utilizing big data well, and many respondents reported difficulties in deriving strategic value and operating through big data. To introduce big data, the strategic value should be identified and environmental conditions such as internal and external regulations and systems should be considered, but these factors were not well reflected. The cause of failure turned out to be that big data was introduced in response to IT trends and the surrounding environment, hastily and before the conditions for introduction were in place.
To introduce big data successfully, the strategic value obtainable through it must be clearly understood, and a systematic environmental analysis of its applicability is essential; but because corporations consider only partial achievements and technological aspects, successful introduction is often not achieved. Prior work shows that most big data research has focused on concepts, cases, and practical suggestions without empirical study. The purpose of this study is to provide a theoretically and practically useful implementation framework and strategies for big data systems by conducting a comprehensive literature review, identifying the factors that influence successful implementation, and analyzing empirical models. To this end, the elements that can affect the intention to adopt big data were derived by reviewing information systems success factors, strategic value perception factors, environmental considerations for information systems introduction, and the big data literature, and a structured questionnaire was developed. The questionnaire was administered to the people in charge of big data inside corporations, and the responses were statistically analyzed. The analysis showed that the strategic value perception factors and intra-industry environmental factors positively affected the intention to adopt big data. The theoretical, practical, and policy implications derived from the results are as follows.
The first theoretical implication is that this study has proposed factors that affect the intention to adopt big data by reviewing strategic value perception, environmental factors, and prior big data studies, and has proposed variables and measurement items that were empirically analyzed and verified. The study is meaningful in that it measured the influence of each variable on adoption intention by verifying the relationships between the independent and dependent variables through a structural equation model. Second, this study defined the independent variables (strategic value perception, environment), the dependent variable (adoption intention), and the moderating variables (type of business and corporate size) for big data adoption intention, and laid a theoretical base for subsequent empirical research in the field by developing measurement items with demonstrated reliability and validity. Third, by verifying the significance of the strategic value perception and environmental factors proposed in prior studies, this study can aid subsequent empirical work on the factors affecting big data adoption. The practical implications are as follows. First, this study laid an empirical base for the big data field by investigating the cause-and-effect relationships between the strategic value perception and environmental factors and adoption intention, and by proposing measurement items with demonstrated reliability and validity. Second, this study found that strategic value perception positively affects big data adoption intention, underscoring the importance of strategic value perception.
Third, the study proposes that a corporation introducing big data should do so only after precise analysis of its industry's internal environment. Fourth, by showing that the influencing factors differ with the size and type of business of the corporation, the study proposes that these should be considered in introducing big data. The policy implications are as follows. First, more varied utilization of big data is needed. The strategic value of big data can be approached in various ways in products, services, productivity, decision-making, and other areas, and can be utilized across all business fields, but major domestic corporations are considering only parts of the product and service fields. Accordingly, in introducing big data, the utilization phase should be reviewed in detail and the system designed in a form that maximizes utilization. Second, the study notes the burdens corporations face in the introduction phase: system cost, difficulty of utilization, and lack of trust in supplier corporations. Since global IT corporations dominate the big data market, domestic corporations' big data adoption cannot help but depend on foreign corporations. Considering that Korea, though a leading IT country, lacks global IT corporations, big data can be seen as a chance to foster world-class corporations, and the government should foster star corporations through active policy support. Third, corporations lack internal and external professional manpower for big data introduction and operation.
Big data is a field in which deriving valuable insight from data matters more than system construction itself. This requires talent equipped with academic knowledge and experience in fields such as IT, statistics, strategy, and management, and such talent should be trained through systematic education. This study laid a theoretical base for empirical research on big data by identifying and verifying the main variables that affect big data adoption intention, and is expected to provide useful guidelines for corporations and policymakers considering big data implementation.

Variation of Hospital Costs and Product Heterogeneity

  • Shin, Young-Soo
    • Journal of Preventive Medicine and Public Health
    • /
    • Vol. 11, No. 1
    • /
    • pp.123-127
    • /
    • 1978
  • The major objective of this research is to identify those hospital characteristics that best explain cost variation among hospitals and to formulate linear models that can predict hospital costs. Specific emphasis is placed on hospital output, that is, the identification of diagnosis related groups (DRGs) of patients which are medically meaningful and demonstrate similar patterns of hospital resource consumption. A casemix index is developed based on the DRGs identified. Considering the common problems encountered in previous hospital cost research, the following study requirements are established for fulfilling the objectives of this research: 1. Selection of hospitals that exercise similar medical and fiscal practices. 2. Identification of an appropriate data collection mechanism from which demographic and medical characteristics of individual patients as well as accurate and comparable cost information can be derived. 3. Development of a patient classification system in which all the patients treated in hospitals can be split into mutually exclusive categories with consistent and stable patterns of resource consumption. 4. Development of a cost finding mechanism through which patient groups' costs can be made comparable across hospitals. A data set of Medicare patients prepared by the Social Security Administration was selected for the study analysis. The data set contained 27,229 record abstracts of Medicare patients discharged from all but one short-term general hospital in Connecticut during the period from January 1, 1971, to December 31, 1972. Each record abstract contained demographic and diagnostic information, as well as charges for specific medical services received. The AUTOGRP system was used to generate 198 DRGs in which the entire range of Medicare patients were split into mutually exclusive categories, each of which shows a consistent and stable pattern of resource consumption.
The 'Departmental Method' was used to generate cost information for the groups of Medicare patients that would be comparable across hospitals. To fulfill the study objectives, an extensive analysis was conducted in the following areas: 1. Analysis of DRGs; in which the level of resource use of each DRG was determined, the length of stay or death rate of each DRG in relation to resource use was characterized, and underlying patterns of the relationships among DRG costs were explained. 2. Exploration of resource use profiles of hospitals; in which the magnitude of differences in the resource uses or death rates incurred in the treatment of Medicare patients among the study hospitals was explored. 3. Casemix analysis; in which four types of casemix-related indices were generated, and the significance of these indices in the explanation of hospital costs was examined. 4. Formulation of linear models to predict hospital costs of Medicare patients; in which nine independent variables (i.e., casemix index, hospital size, complexity of service, teaching activity, location, casemix-adjusted death rate index, occupancy rate, and casemix-adjusted length of stay index) were used for determining factors in hospital costs. Results from the study analysis indicated that: 1. The system of 198 DRGs for Medicare patient classification was demonstrated not only as a strong tool for determining the pattern of hospital resource utilization of Medicare patients, but also for categorizing patients by their severity of illness. 2. The weighted mean total case cost (TOTC) of the study hospitals for Medicare patients during the study years was $1,127.02 with a standard deviation of $117.20. The hospital with the highest average TOTC ($1,538.15) was 2.08 times more expensive than the hospital with the lowest average TOTC ($743.45). The weighted mean per diem total cost (DTOC) of the study hospitals for Medicare patients during the study years was $107.98 with a standard deviation of $15.18.
The hospital with the highest average DTOC ($147.23) was 1.87 times more expensive than the hospital with the lowest average DTOC ($78.49). 3. The linear models for each of the six types of hospital costs were formulated using the casemix index and the eight other hospital variables as the determinants. These models explained variance to the extent of 68.7 percent of total case cost (TOTC), 63.5 percent of room and board cost (RMC), 66.2 percent of total ancillary service cost (TANC), 66.3 percent of per diem total cost (DTOC), 56.9 percent of per diem room and board cost (DRMC), and 65.5 percent of per diem ancillary service cost (DTANC). The casemix index alone explained approximately one half of interhospital cost variation: 59.1 percent for TOTC and 44.3 percent for DTOC. These results demonstrate that the casemix index is the most important determinant of interhospital cost variation. Future research and policy implications in regard to the results of this study are envisioned in the following three areas: 1. Utilization of casemix-related indices in the Medicare data systems. 2. Refinement of data for hospital cost evaluation. 3. Development of a system for reimbursement and cost control in hospitals.
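The finding that the casemix index alone explains 59.1 percent of TOTC variation corresponds to the R² of a univariate least-squares regression of cost on the index. A minimal sketch of that computation, using invented hospital figures purely for illustration (the study's actual data are not reproduced here):

```python
# Ordinary least-squares fit of average total case cost (TOTC) on a
# casemix index, plus the R^2 "variance explained" statistic.
# All numbers below are hypothetical, not from the study.
casemix = [0.85, 0.92, 1.00, 1.08, 1.15, 1.21]           # index per hospital
totc    = [790.0, 910.0, 1050.0, 1180.0, 1340.0, 1500.0]  # avg cost ($)

n = len(casemix)
mean_x = sum(casemix) / n
mean_y = sum(totc) / n

# Slope b and intercept a of y = a + b*x by least squares
b = sum((x - mean_x) * (y - mean_y) for x, y in zip(casemix, totc)) \
    / sum((x - mean_x) ** 2 for x in casemix)
a = mean_y - b * mean_x

# R^2: share of interhospital cost variance explained by the index
ss_res = sum((y - (a + b * x)) ** 2 for x, y in zip(casemix, totc))
ss_tot = sum((y - mean_y) ** 2 for y in totc)
r_squared = 1 - ss_res / ss_tot
print(f"slope={b:.1f}  intercept={a:.1f}  R^2={r_squared:.3f}")
```

The study's multivariate models extend this idea to nine predictors; the mechanics (fit, then compare residual to total variance) are the same.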

  • PDF

운영연구(OR)의 도서관응용 -그 몇가지 잠재적응용분야에 대하여- (The Application of Operations Research to Librarianship : Some Research Directions)

  • 최성진
    • 한국문헌정보학회지
    • /
    • 제4권
    • /
    • pp.43-71
    • /
    • 1975
  • Operations research has developed rapidly since its origins in World War II. Practitioners of O. R. have contributed to almost every aspect of government and business. More recently, a number of operations researchers have turned their attention to library and information systems, and the author believes that significant research has resulted. It is the purpose of this essay to introduce the library audience to some of these accomplishments, to present some of the author's hypotheses on the subject of library management to which he believes O. R. has great potential, and to suggest some future research directions. Some problem areas in librarianship where O. R. may play a part have been discussed and are summarized below. (1) Library location. It is usually necessary to strike a balance between accessibility and cost in location problems. Many mathematical methods are available for identifying the optimal locations once the balance between these two criteria has been decided. The major difficulties lie in relating cost to size and in taking future change into account when discriminating among possible solutions. (2) Planning new facilities. Standard approaches to using mathematical models for simple investment decisions are well established. If the problem is one of choosing the most economical way of achieving a certain objective, one may compare the alternatives by using one of the discounted cash flow techniques. In other situations it may be necessary to use a cost-benefit approach. (3) Allocating library resources. In order to allocate the resources to best advantage the librarian needs to know how the effectiveness of the services he offers depends on the way he deploys his resources. The O. R. approach to these problems is to construct a model representing effectiveness as a mathematical function of the levels of different inputs (e.g., numbers of people in different jobs, acquisitions of different types, physical resources). (4) Long term planning.
Resource allocation problems are generally concerned with up to one and a half years ahead. The longer term certainly offers both greater freedom of action and greater uncertainty. Thus it is difficult to generalize about long term planning problems. In other fields, however, O. R. has made a significant contribution to long range planning, and it is likely to have a contribution to make in librarianship as well. (5) Public relations. It is generally accepted that actual and potential users are too ignorant both of the range of library services provided and of how to make use of them. How should services be brought to the attention of potential users? The answer seems to lie in obtaining empirical evidence by controlled experiments in which a group of libraries participate. (6) Acquisition policy. In comparing alternative policies for acquisition of materials one needs to know, first, the implications of each policy for each service which depends on the stock. Second is the relative importance to be ascribed to each service for each class of user. By reducing the burden of the first, formal models will allow the librarian to concentrate his attention upon the value judgements which will be necessary for the second. (7) Loan policy. The approach to choosing between loan policies is much the same as the previous approach. (8) Manpower planning. For large library systems one should consider constructing models which will permit the skills necessary in the future to be compared with predictions of the skills that will be available, so as to allow informed decisions. (9) Management information systems for libraries. A great deal of data can be available in libraries as a by-product of all recording activities. It is particularly tempting when procedures are computerized to make summary statistics available as a management information system. The value of information to particular decisions that may have to be taken in the future is best assessed in terms of a model of the relevant problem. (10) Management gaming.
One of the most common uses of a management game is as a means of developing staff's ability to take decisions. The value of such exercises depends upon the validity of the computerized model. If the model were sufficiently simple to take the form of a mathematical equation, decision-makers would probably be able to learn adequately from a graph. More complex situations require simulation models. (11) Diagnostic tools. Libraries are sufficiently complex systems that it would be useful to have available simple means of telling whether performance could be regarded as satisfactory which, if it could not, would also provide pointers to what was wrong. (12) Data banks. It would appear to be worth considering establishing a bank for certain types of data. If certain items on questionnaires were to take a standard form, a greater pool of data would be available for various analyses. (13) Effectiveness measures. The meaning of a library performance measure is not readily interpreted. Each measure must itself be assessed in relation to the corresponding measures for earlier periods of time and a standard measure that may be a corresponding measure in another library, the 'norm', the 'best practice', or user expectations.
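The discounted cash flow comparison mentioned under (2) can be sketched in a few lines: discount each year's cost back to the present and pick the alternative with the lower net present cost. The facilities, cost figures, and 8% discount rate below are hypothetical, chosen only to show the mechanics:

```python
# Compare two hypothetical ways of achieving the same objective by
# net present cost: discount each year's cost back to today.
def present_cost(cash_flows, rate):
    """Sum of costs discounted at `rate`; cash_flows[t] is the cost in year t."""
    return sum(c / (1 + rate) ** t for t, c in enumerate(cash_flows))

rate = 0.08  # assumed annual discount rate

# Option A: large up-front building cost, low running costs thereafter
option_a = [500_000, 20_000, 20_000, 20_000, 20_000]
# Option B: lease - no capital outlay, higher annual payments
option_b = [0, 150_000, 150_000, 150_000, 150_000]

npc_a = present_cost(option_a, rate)
npc_b = present_cost(option_b, rate)
print(f"Option A: {npc_a:,.0f}  Option B: {npc_b:,.0f}")
# The option with the lower net present cost is the more economical choice.
```

Note how discounting changes the comparison: at a high enough rate, deferring expenditure (Option B) looks cheaper even though its undiscounted total is larger.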

  • PDF

Perspective of breaking stagnation of soybean yield under monsoon climate

  • Shiraiwa, Tatsuhiko
    • 한국작물학회:학술대회논문집
    • /
    • 한국작물학회 2017년도 9th Asian Crop Science Association conference
    • /
    • pp.8-9
    • /
    • 2017
  • Soybean yield has been low and unstable in Japan and other areas of East Asia, despite a long history of cultivation. This contrasts with the consistent increase of yield in North and South America. This presentation describes a perspective on breaking the stagnation of soybean yield in East Asia, considering the factors behind the different yields between regions. A large amount of rainfall with occasional dry spells in the summer is in the nature of the monsoon climate, and, as frequently stated, excess water is a factor in low and unstable soybean yield. For example, there exists a great deal of field-to-field variation in yield of 'Tanbaguro' soybean, which is reputed for high market value and thus cultivated intensively, and this results in low average yield. According to our field survey, a major portion of yield variation occurs in the early growth period. Soybean production on drained paddy fields is also vulnerable to drought stress after flowering. An analysis at the above study site demonstrated a substantial field-to-field variation of canopy transpiration activity in the mid-summer, but the variation of pod-set was not as large as that of early growth. As frequently mentioned by the contest winners of good-practice farming, avoidance of the excess water problem in the early growth period is of greatest importance. A series of technological developments took place in Japan in crop management for stable crop establishment and growth, including seed-bed preparation with ridge and/or chisel ploughing, adjustment of seed moisture content, seed treatment with mancozeb+metalaxyl, and the water table control system FOEAS. A unique success is seen in the tidal swamp area in South Sumatra with the Saturated Soil Culture (SSC), which is for managing the acidity problem of pyrite soils. In 2016, an average yield of 2.4 t ha$^{-1}$ was recorded for a 450 ha area with SSC (Ghulamahdi 2017, personal communication).
This is a sort of raised-bed culture and thus the moisture condition is kept markedly stable during the growth period. For genetic control, too, many attempts are ongoing for better emergence and plant growth after emergence under excess water. There seem to be two aspects of excess water resistance, one related to phytophthora resistance and the other to better growth under excess water. The improvement for the latter is particularly challenging, and a genomic approach is expected to be effectively utilized. Crop model simulation could estimate and evaluate the impact of environmental and genetic factors, but comprehensive crop models for soybean are mainly for cultivation on upland fields, and crop response to excess water is not fully accounted for. A soybean model for production on drained paddy fields under the monsoon climate is needed to coordinate technological development under a changing climate. We recently recognized that the yield potential of recent US cultivars is greater than that of Japanese cultivars, and this also may be responsible for the different yield trends. Cultivar comparisons proved that higher yields are associated with greater biomass production, specifically during early seed filling, to which high and well-sustained activity of leaf gas exchange is related. In fact, leaf stomatal conductance is considered to have been improved during the last couple of decades in the USA through selections for high yield in several crop species. It is suspected that the priority given to product quality of soybean as a food crop, especially large seed size in Japan, did not allow efficient improvement of productivity. We also recently found a substantial variation of yielding performance under an environment of Indonesia among divergent cultivars from tropical and temperate regions, in part through biomass productivity. Gas exchange activity again seems to be involved.
Unlike in North America, where transpiration adjustment is considered necessary to avoid terminal drought, under the monsoon climate with its wet summer, plants with higher gas exchange activity than the current level might be advantageous. In order to explore higher or better-adjusted canopy function, methodological development is needed for canopy-level evaluation of transpiration activity. The stagnation of soybean yield could be broken through controlling the variable water environment and through breeding efforts to improve the quality-oriented cultivars for stable and high yield.

  • PDF