• Title/Summary/Keyword: Field performance


Two Faces of Entrepreneurial Leadership: The Paradoxical Effect Reflecting Followers' Regulatory Focus (기업가적 리더십의 양면성: 구성원의 조절 초점 성향에 따른 패러독스 효과)

  • Sang-Jib Kwon
    • Asia-Pacific Journal of Business Venturing and Entrepreneurship
    • /
    • v.18 no.4
    • /
    • pp.165-175
    • /
    • 2023
  • In venture creation research, studying entrepreneurial leadership is important for uncovering and understanding the causal processes underlying innovative behavior. Although previous studies show that entrepreneurial leadership enhances followers' innovative behavior, there is little research on the interaction between entrepreneurial leadership and followers' characteristics. The present study focuses on the paradoxical effects of entrepreneurial leadership on self-efficacy and innovative behavior. Drawing on regulatory focus theory, it proposes that the interaction effect of entrepreneurial leadership and followers' regulatory focus differs between promotion-focused and prevention-focused followers. To strengthen the causal inference, the study conducted a priming experiment with employees of SMEs, using a 2 (entrepreneurial leadership vs. control) × 2 (regulatory focus: promotion vs. prevention) between-participants design. The results show that (1) promotion-focused individuals especially benefited from entrepreneurial leadership in terms of its effect on their self-efficacy and innovative behavior, (2) whereas entrepreneurial leadership was negatively related to the self-efficacy and innovative behavior of prevention-focused followers. In sum, the results support the hypothesized combined effect of entrepreneurial leadership and regulatory focus on innovative behavior through self-efficacy: promotion-focused followers showed more innovative behavior than prevention-focused followers when their leader's style was entrepreneurial, and this paradoxical effect on innovative behavior was mediated by followers' self-efficacy.
This study helps explain how leaders' entrepreneurial leadership boosts followers' innovative behavior, particularly for employees with a promotion focus. It contributes to the theory of entrepreneurial leadership and regulatory focus and to the innovation literature. The findings shed light on the organizational processes that shape innovative behavior in venture/startup corporations and offer contributions to the venture business field.
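The 2 × 2 between-participants design described above turns on an interaction contrast: the effect of entrepreneurial leadership should be positive for promotion-focused followers and negative for prevention-focused ones. The sketch below illustrates that contrast with hypothetical cell data (the study's raw scores are not published in the abstract), not the study's actual results.

```python
import numpy as np

# Hypothetical innovative-behavior scores for the four cells of the
# 2 (leadership: entrepreneurial vs. control) x 2 (focus: promotion vs.
# prevention) design -- illustrative numbers only, not the study's data.
cells = {
    ("entrepreneurial", "promotion"):  np.array([5.8, 6.1, 5.9, 6.0]),
    ("control",         "promotion"):  np.array([4.9, 5.0, 5.1, 4.8]),
    ("entrepreneurial", "prevention"): np.array([4.2, 4.0, 4.1, 4.3]),
    ("control",         "prevention"): np.array([4.8, 4.9, 5.0, 4.7]),
}

means = {k: v.mean() for k, v in cells.items()}

# Simple effect of entrepreneurial leadership within each focus group.
effect_promotion = means[("entrepreneurial", "promotion")] - means[("control", "promotion")]
effect_prevention = means[("entrepreneurial", "prevention")] - means[("control", "prevention")]

# Interaction contrast: a positive effect for promotion-focused followers
# combined with a negative effect for prevention-focused followers is the
# paradoxical (crossover) pattern the study reports.
interaction = effect_promotion - effect_prevention
print(effect_promotion, effect_prevention, interaction)
```

In a full analysis these contrasts would be tested with a two-way ANOVA and the mediation through self-efficacy with a bootstrap, but the crossover sign pattern is the core of the "two faces" claim.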


Analysis of research trends for utilization of P-MFC as an energy source for nature-based solutions - Focusing on co-occurring word analysis using VOSviewer - (자연기반해법의 에너지원으로서 P-MFC 활용을 위한 연구경향 분석 - VOSviewer를 활용한 동시 출현단어 분석 중심으로 -)

  • Mi-Li Kwon;Gwon-Soo Bahn
    • Journal of Wetlands Research
    • /
    • v.26 no.1
    • /
    • pp.41-50
    • /
    • 2024
  • Plant microbial fuel cells (P-MFCs) are biomass-based energy technologies that generate electricity from plants and their root microbial communities, making them well suited to nature-based solutions that consider environmental sustainability. To develop P-MFC technology suitable for domestic waterfront spaces, international research trends must first be analyzed. Accordingly, this study collected 700 P-MFC-related research papers from Web of Science, derived core keywords using the word-analysis program VOSviewer, and analyzed research trends. First, P-MFC-related research has been increasing since 1998, especially since the mid-to-late 2010s. The countries with the most publications were China, the U.S., and India; since the 2010s, interest in P-MFCs has grown, and publications from the Philippines, Ukraine, and Mexico, which have abundant waterfront spaces and wetland environments, are increasing. Second, by period, research in 1998-2015 mainly verified microbial fuel cell performance in different environments; research in 2016-2020 focused on the specific conditions of microbial fuel cell use and on the structure and development of P-MFCs; and research in 2021-2023 addressed constraints on and efficiency improvements in P-MFC development. The international research trends identified through this study can serve as useful data for developing technologies suited to domestic waterfront spaces. Beyond this study, further research is needed on research trends and levels in subsectors, and in order to develop and revitalize P-MFC technology in Korea, research on field applicability should be expanded and relevant policies and systems improved.
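The co-occurring word analysis the study runs in VOSviewer rests on a keyword co-occurrence matrix: two keywords are linked when they appear in the same paper, and link strength is the number of such papers. A minimal sketch of how those counts are built from per-paper keyword lists (the keywords below are hypothetical, not the study's dataset):

```python
from collections import Counter
from itertools import combinations

# Hypothetical author-keyword lists for a few papers (illustrative only).
papers = [
    ["P-MFC", "bioelectricity", "wetland"],
    ["P-MFC", "wetland", "constructed wetland"],
    ["P-MFC", "bioelectricity", "electrode"],
]

# Count how often each unordered keyword pair appears in the same paper.
cooccurrence = Counter()
for keywords in papers:
    for a, b in combinations(sorted(set(keywords)), 2):
        cooccurrence[(a, b)] += 1

# The most frequent pairs form the strongest links in a VOSviewer-style map.
for pair, count in cooccurrence.most_common(3):
    print(pair, count)
```

VOSviewer then lays out these links with its own clustering and mapping algorithms; the counting step above is the shared starting point.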

The Effect of Information Quality and System Quality on Knowledge Service Competence: Focusing on Knowledge Service Types (지식서비스의 정보품질과 시스템품질이 지식서비스 역량에 미치는 영향: 지식서비스 유형을 중심으로)

  • Geun-Wan Park;Hyun-Ji Park;Sung-Hoon Mo;Cheol-Hyun Lim;Hee-Seok Choi;Seok-Hyoung Lee;Hye-Jin Lee;Seung-June Hwang;Chang-Hee Han
    • Information Systems Review
    • /
    • v.21 no.4
    • /
    • pp.1-29
    • /
    • 2019
  • Knowledge resources play a role in promoting the sustainable growth of an organization, so it is important for members of the organization to acquire knowledge consistently if the company is to keep growing. Knowledge services are the field that provides the information and infrastructure enabling members of an organization to acquire new knowledge. Recognizing the importance of knowledge services, we analyzed the level of knowledge service management and development through the impact of knowledge quality on user capabilities. First, a matrix of knowledge patterns was constructed based on the type of information and the level of customer interaction. According to these patterns, knowledge services were classified into three types, information provision, information analysis, and infrastructure, and structural model analysis results were presented for each type. We found that the impact of knowledge service quality on user competence differs by service type. The results suggest new indicators for measuring the performance of knowledge services and provide information for reconstructing services around the user, considering the integrated operation of knowledge services and the organizational design of knowledge services.

A Study on the Digital Drawing of Archaeological Relics Using Open-Source Software (오픈소스 소프트웨어를 활용한 고고 유물의 디지털 실측 연구)

  • LEE Hosun;AHN Hyoungki
    • Korean Journal of Heritage: History & Science
    • /
    • v.57 no.1
    • /
    • pp.82-108
    • /
    • 2024
  • With the transition of archaeological recording methods from analog to digital, 3D scanning technology has been actively adopted in the field, and research on digital archaeological data gathered from 3D scanning and photogrammetry is continuously being conducted. However, due to cost and staffing issues, most buried-cultural-heritage organizations hesitate to adopt such digital technology. This paper presents a digital recording method for relics that uses open-source software together with photogrammetry, believed to be the most efficient of the 3D scanning approaches. The digital recording process consists of three stages: acquiring a 3D model, creating a joining map with the edited 3D model, and producing a digital drawing. To enhance accessibility, the method uses only open-source software throughout the entire process. The results confirm that, in the quantitative evaluation, the deviation of numerical measurements between the actual artifact and the 3D model was minimal, and the quantitative quality analyses from the open-source and commercial software showed high similarity. However, data processing was overwhelmingly faster in the commercial software, presumably because of higher computational speed from improved algorithms. In the qualitative evaluation, some differences in mesh and texture quality occurred: 3D models generated by open-source software exhibited noise on the mesh surface, rough mesh surfaces, and difficulty in confirming the production marks of relics and the expression of patterns. Nevertheless, some of the open-source software produced quality comparable to that of commercial software in both quantitative and qualitative evaluations.
Open-source software for editing 3D models could not only post-process, match, and merge the 3D models, but also adjust scale, produce joining surfaces, and render the images necessary for the actual measurement of relics. The final completed drawing was traced in a CAD program, which is also open-source software. In archaeological research, photogrammetry is applicable to many processes, including excavation, report writing, and research on numerical data from 3D models. With breakthrough developments in computer vision, open-source software has diversified and its performance has significantly improved. Given the high accessibility of such digital technology, 3D model data acquired in archaeology will serve as basic data for the preservation of and active research on cultural heritage.
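The quantitative evaluation above compares dimensional measurements of the physical artifact with the same dimensions taken on the 3D model. A minimal sketch of that deviation check, with hypothetical caliper and model values standing in for the paper's measurements:

```python
import numpy as np

# Hypothetical dimensional measurements (mm) of an artifact taken with
# calipers vs. the same dimensions measured on the photogrammetric 3D
# model -- illustrative values, not the paper's data.
actual = np.array([152.4, 98.7, 45.2, 210.1])  # e.g. rim diameter, height, ...
model  = np.array([152.1, 98.9, 45.1, 210.4])

deviation = np.abs(actual - model)

# Small mean/max deviations indicate the scaled 3D model is dimensionally
# faithful enough to substitute for manual measured drawings.
print(deviation.mean(), deviation.max())
```

In practice the model must first be scaled with a reference target during photogrammetric processing, since photogrammetry alone recovers shape but not absolute scale.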

Characteristics and Implications of Sports Content Business of Big Tech Platform Companies : Focusing on Amazon.com (빅테크 플랫폼 기업의 스포츠콘텐츠 사업의 특징과 시사점 : 아마존을 중심으로)

  • Shin, Jae-hyoo
    • Journal of Venture Innovation
    • /
    • v.7 no.1
    • /
    • pp.1-15
    • /
    • 2024
  • This study aims to elucidate the characteristics of big tech platform companies' sports content business in an environment of rapid digital transformation. Specifically, it examines the market structure of big tech platform companies with a focus on Amazon, reveals the role of sports content within this structure through an analysis of Amazon's sports marketing business, and provides an outlook on the sports content business of big tech platform companies. Operating two-sided market platform business models, big tech companies incorporate sports content as a strategy to enhance the value of their platforms and to consolidate their monopoly positions, maximizing profits by increasing the synergy of platform ecosystems such as infrastructure. Amazon acquires popular live sports broadcasting rights on a continental or national basis and supplies them to its platforms, which not only attracts new customers and increases purchasing, but also enables Amazon to provide IT solution services to sports organizations and teams while planning and supplying various promotional contents, thus creating synergy across Amazon's platforms, including its advertising business. Amazon also expands its business opportunities and increases its overall value by supplying live sports content to Amazon Prime Video and Amazon Prime, providing technical services to various stakeholders through Amazon Web Services, and offering Amazon Marketing Cloud services for analyzing and predicting advertisers' advertising and marketing performance. This gives rise to a new paradigm in the sports marketing business in the digital era, stemming from the difference in market structure between big tech companies based on two-sided market platforms and legacy global companies based on one-sided markets.
The core of this new model is business built on developing various contents from live sports streaming rights, and sports content marketing will become a major field of sports marketing alongside traditional broadcasting rights and sponsorship. Big tech platform companies such as Amazon, Apple, and Google have the potential to become new global sports marketing companies, and current sports marketing and advertising companies, as well as teams and leagues, face both crises and opportunities.

Factor Analysis Affecting on Chartering Decision-making in the Dry Bulk Shipping Market (부정기 건화물선 시장에서 용선 의사결정에 영향을 미치는 요인 분석)

  • Lee, Choong-Ho;Park, Keun-Sik
    • Journal of Korea Port Economic Association
    • /
    • v.40 no.1
    • /
    • pp.151-163
    • /
    • 2024
  • This study sought to confirm the impact of analytical methods and behavioral-economics factors on decision-making when chartering in the dry bulk shipping market. The study of the chartering decision-making model began with the question of why, in the same market situation, shipping companies do not decide and act rationally on the basis of analytical methods such as freight forecasting and structured alternative selection. Understanding the chartering decision-making model requires studying the impact on chartering decisions of behavioral-economics concepts such as heuristics, loss aversion, and herding behavior. Through AHP analysis, the relative importance of the factors relied upon in chartering decisions was measured. The top-level factors ranked as follows: market factors, heuristics, internal factors, herding behavior, and loss aversion. Among the detailed factors, the spot freight index and empirical intuition were confirmed as the factors most relied upon when making decisions; empirical intuition proved more important than internal analysis, which is an analytical method. This study is meaningful in that it academically examined and demonstrated the bounded rationality of humans, who cannot be fully rational and sometimes rely on experience or psychological tendencies, by applying it to the chartering decision-making model in the dry bulk shipping market. It also suggests that in the dry bulk shipping market, which is uncertain and where decisions carry a high risk of loss, the experience and insight of decision-makers have a very important impact on the performance and operating profits of shipping companies.
Even though chartering is a decision-making field that requires judgment and intuition based on heuristics, decision-makers need to be aware of this decision-making model in order to reduce repeated mistakes of deciding contrary to the market situation. There is also a need for companies to internally research analytical methods and procedures that can complement heuristics such as empirical intuition.
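The AHP analysis above derives factor weights from pairwise comparisons. A minimal sketch of the standard eigenvector method, using a hypothetical 3 × 3 judgment matrix over three of the study's factors (the judgments are illustrative, not the survey's responses):

```python
import numpy as np

# Hypothetical pairwise-comparison matrix (Saaty's 1-9 scale) over three
# factors from the study: market factors, heuristics, internal factors.
A = np.array([
    [1.0, 2.0, 3.0],   # market factors vs. the others
    [1/2, 1.0, 2.0],   # heuristics
    [1/3, 1/2, 1.0],   # internal factors
])

# Priority weights: principal eigenvector of A, normalized to sum to 1.
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()

# Consistency check: CI = (lambda_max - n) / (n - 1), CR = CI / RI,
# with random index RI = 0.58 for n = 3 (Saaty's table).
n = A.shape[0]
ci = (eigvals.real[k] - n) / (n - 1)
cr = ci / 0.58
print(weights, cr)  # CR < 0.1 means the judgments are acceptably consistent
```

With these judgments the weights come out ordered market factors > heuristics > internal factors, mirroring the ranking reported in the abstract.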

Comparison of Seedling Quality of Cucumber Seedlings and Growth and Production after Transplanting according to Differences in Seedling Production Systems (육묘 생산 시스템 차이에 따른 오이 모종의 묘소질과 정식 후 생육 비교)

  • Soon Jae Hyeon;Hwi Chan Yang;Young Ho Kim;Yun Hyeong Bae;Dong Cheol Jang
    • Journal of Bio-Environment Control
    • /
    • v.33 no.2
    • /
    • pp.88-98
    • /
    • 2024
  • This study provides basic data on the growth and production of seedlings produced in plant factories with artificial lighting by comparing the seedling quality, growth and fruit characteristics, and production after transplanting of cucumber seedlings raised under the different environments of a plant factory with artificial lighting and a conventional greenhouse nursery. The control group consisted of greenhouse seedlings (GH) grown in the conventional nursery before transplanting. Plant-factory-to-greenhouse seedlings (PG) were grown for 9 days in a plant factory with artificial lighting and for 13 days in a conventional nursery. Plant factory seedlings (PF) were grown in a plant factory with artificial lighting for 22 days until planting. In terms of seedling quality, PFs had the highest relative growth rate and compactness and the best root-zone development. After transplanting, PFs tended to grow faster; their first harvest date was 2 days earlier than that of GHs, and their growing season ended 1 day earlier. The female-flower flowering rate of PFs was high, while their fruit set rate was the lowest. Production per unit area was highest for PFs at 10.23 kg. The performance index on an absorption basis, the most sensitive chlorophyll fluorescence parameter, was highest for PFs at 4.14 at 4 weeks after transplanting. Comparison of the maximum quantum yield of primary PSII photochemistry and the dissipated energy flux per PSII reaction center at 4 weeks after transplanting indicated that PFs were the least stressed. PFs had the best seedling quality, growth, and production after planting, and their fruit quality was consistent with that of greenhouse seedlings. Therefore, plant factory seedlings can be used in the field.

Transfer Learning using Multiple ConvNet Layers Activation Features with Principal Component Analysis for Image Classification (전이학습 기반 다중 컨볼류션 신경망 레이어의 활성화 특징과 주성분 분석을 이용한 이미지 분류 방법)

  • Byambajav, Batkhuu;Alikhanov, Jumabek;Fang, Yang;Ko, Seunghyun;Jo, Geun Sik
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.1
    • /
    • pp.205-225
    • /
    • 2018
  • A convolutional neural network (ConvNet) is a class of powerful deep neural network that can analyze and learn hierarchies of visual features. The first such neural network (the Neocognitron) was introduced in the 1980s. At that time, neural networks were not widely used in either industry or academia because of the shortage of large-scale datasets and low computational power. A few decades later, however, in 2012, Krizhevsky made a breakthrough in the ILSVRC-12 visual recognition competition using a convolutional neural network, reviving interest in neural networks. The success of convolutional neural networks rests on two main factors: the emergence of advanced hardware (GPUs) for sufficient parallel computation, and the availability of large-scale datasets such as ImageNet (ILSVRC) for training. Unfortunately, many new domains are bottlenecked by these factors: for most domains it is difficult and labor-intensive to gather a large-scale dataset for training a ConvNet, and even with such a dataset, training a ConvNet from scratch requires expensive resources and is time-consuming. Both obstacles can be addressed by transfer learning, a method for transferring knowledge from a source domain to a new domain. There are two major transfer learning settings: using the ConvNet as a fixed feature extractor, and fine-tuning the ConvNet on a new dataset. In the first, a pre-trained ConvNet (for example, trained on ImageNet) computes feed-forward activations of the image, and activation features are extracted from specific layers. In the second, the ConvNet classifier is replaced and retrained on the new dataset, and the weights of the pre-trained network are fine-tuned with backpropagation. In this paper, we focus on using multiple ConvNet layers as a fixed feature extractor only.
However, applying the high-dimensional features extracted directly from multiple ConvNet layers remains a challenging problem. We observe that features extracted from different ConvNet layers capture different characteristics of the image, which suggests that a better representation can be obtained by finding the optimal combination of multiple layers. Based on that observation, we propose to employ multiple ConvNet layer representations for transfer learning instead of a single-layer representation. Our pipeline has three steps. First, images from the target task are fed forward through pre-trained AlexNet, and activation features are extracted from its three fully connected layers. Second, the activation features of the three layers are concatenated to obtain a multiple-layer representation that carries more information about the image; concatenating the three fully connected layer features yields a 9192-dimensional (4096 + 4096 + 1000) representation. However, features extracted from multiple layers of the same ConvNet are redundant and noisy. Thus, in the third step, we use principal component analysis (PCA) to select salient features before the training phase. With salient features, the classifier can classify images more accurately, improving transfer learning performance. To evaluate the proposed method, experiments on three standard datasets (Caltech-256, VOC07, and SUN397) compare multiple-layer representations against single-layer representations, using PCA for feature selection and dimension reduction. Our experiments demonstrate the importance of feature selection for multiple-layer representations.
Moreover, our proposed approach achieved 75.6% accuracy versus 73.9% for the FC7 layer on Caltech-256, 73.1% versus 69.2% for the FC8 layer on VOC07, and 52.2% versus 48.7% for the FC7 layer on SUN397. We also showed that it achieved superior performance over existing work, with accuracy improvements of 2.8%, 2.1%, and 3.1% on Caltech-256, VOC07, and SUN397, respectively.
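The concatenate-then-PCA pipeline can be sketched compactly. Random arrays stand in below for the AlexNet fc6/fc7/fc8 activations (in practice these come from feeding images through the pre-trained network); PCA is implemented directly via SVD so the sketch stays dependency-light.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for activation features of n images from AlexNet's three fully
# connected layers (fc6: 4096-d, fc7: 4096-d, fc8: 1000-d). Random data
# here -- in practice these are extracted from the pre-trained network.
n = 50
fc6 = rng.normal(size=(n, 4096))
fc7 = rng.normal(size=(n, 4096))
fc8 = rng.normal(size=(n, 1000))

# Step 2 of the pipeline: concatenate layer features -> 9192-d per image
# (4096 + 4096 + 1000), a richer but redundant representation.
features = np.concatenate([fc6, fc7, fc8], axis=1)

# Step 3: PCA via SVD keeps the salient directions before training the
# classifier (the number of components cannot exceed the n samples here).
centered = features - features.mean(axis=0)
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
k = 40
reduced = centered @ Vt[:k].T
print(features.shape, reduced.shape)
```

A linear SVM would then be trained on `reduced`; the paper's reported gains come from this combined representation outperforming any single layer's features.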

End to End Model and Delay Performance for V2X in 5G (5G에서 V2X를 위한 End to End 모델 및 지연 성능 평가)

  • Bae, Kyoung Yul;Lee, Hong Woo
    • Journal of Intelligence and Information Systems
    • /
    • v.22 no.1
    • /
    • pp.107-118
    • /
    • 2016
  • The advent of 5G mobile communications, expected in 2020, will provide many services such as the Internet of Things (IoT) and vehicle-to-infra/vehicle/nomadic (V2X) communication. Realizing these services imposes many requirements: reduced latency, high data rate and reliability, and real-time service. In particular, high reliability and delay sensitivity together with increased data rates are very important for M2M, IoT, and Factory 4.0. Around the world, 5G standardization organizations have considered these services and grouped them to derive the technical requirements and service scenarios. The first scenario covers broadcast services that use a high data rate, for cases such as sporting events or emergencies; the second covers support for e-Health, car reliability, and the like; the third concerns VR games with delay sensitivity and real-time techniques. Recently, these groups have been reaching agreement on the requirements for such scenarios and their target levels. Various techniques are being studied to satisfy these requirements and are being discussed in the context of software-defined networking (SDN) as the next-generation network architecture. SDN, which is being standardized by the ONF, basically refers to a structure that separates control-plane signaling from data-plane packets. One of the best examples requiring low latency and high reliability is an intelligent traffic system (ITS) using V2X. Because a car passes through a small cell of the 5G network very rapidly, messages to be delivered in an emergency must be transported in a very short time; this is a typical example of high delay sensitivity. 5G must support the high-reliability and delay-sensitivity requirements of V2X in the field of traffic control, and for these reasons V2X is a major delay-critical application.
V2X (vehicle-to-infra/vehicle/nomadic) covers all types of communication applicable to roads and vehicles; it refers to a connected or networked vehicle. V2X can be divided into three kinds of communication: between a vehicle and infrastructure (vehicle-to-infrastructure; V2I), between vehicles (vehicle-to-vehicle; V2V), and between a vehicle and mobile equipment (vehicle-to-nomadic devices; V2N), with more to be added in various fields in the future. Because the SDN structure is under consideration as the next-generation network architecture, its design is significant; however, the centralized architecture of SDN can be unfavorable for delay-sensitive services, since a centralized controller must communicate with many nodes and provide processing power. Therefore, for emergency V2X communications, delay-related control functions require a supporting tree structure, and the architecture of the network processing the vehicle information becomes a major variable affecting delay. Because it is difficult to meet the desired level of delay sensitivity with a typical fully centralized SDN structure, research on the optimal size of an SDN for processing information is needed. This study examined the SDN architecture under the V2X emergency delay requirements of a 5G network in the worst-case scenario, and performed a system-level simulation over vehicle speed, cell radius, and cell tier to derive the range of cells for information transfer in an SDN network. In the simulation, because 5G provides a sufficiently high data rate, the neighboring-vehicle support information delivered to the car was assumed to be error-free.
Furthermore, the 5G small cell was assumed to have a cell radius of 50-100 m, and the maximum vehicle speed considered was 30-200 km/h, in order to examine the network architecture that minimizes delay.
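The stated parameter ranges bound how long a vehicle stays inside one small cell, which is what makes the delay budget so tight: the longest possible path through a cell is its diameter, so dwell time is at most 2r/v. A quick sketch over the simulation's assumed ranges:

```python
# Worst-case time a vehicle spends inside a 5G small cell: the chord is at
# most the cell diameter, so dwell time <= 2r / v. Parameter ranges follow
# the simulation assumptions above (r = 50-100 m, v = 30-200 km/h).
def dwell_time_s(radius_m: float, speed_kmh: float) -> float:
    speed_ms = speed_kmh / 3.6          # convert km/h -> m/s
    return 2 * radius_m / speed_ms

# Tightest case for delay: the smallest cell crossed at the highest speed.
worst = dwell_time_s(50, 200)   # ~1.8 s to traverse the cell
best = dwell_time_s(100, 30)    # ~24 s
print(worst, best)
```

At 200 km/h through a 50 m radius cell, the vehicle is reachable for under two seconds, so handover signaling, SDN control-plane decisions, and the emergency message itself must all complete well inside that window.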

Korean Sentence Generation Using Phoneme-Level LSTM Language Model (한국어 음소 단위 LSTM 언어모델을 이용한 문장 생성)

  • Ahn, SungMahn;Chung, Yeojin;Lee, Jaejoon;Yang, Jiheon
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.2
    • /
    • pp.71-88
    • /
    • 2017
  • Language models were originally developed for speech recognition and language processing. Using a set of example sentences, a language model predicts the next word or character from sequential input data. N-gram models have been widely used, but they cannot model correlations between input units efficiently, since they are probabilistic models based on the frequency of each unit in the training set. Recently, with the development of deep learning algorithms, recurrent neural network (RNN) and long short-term memory (LSTM) models have been widely used as neural language models (Ahn, 2016; Kim et al., 2016; Lee et al., 2016). These models can reflect dependencies between the objects entered sequentially into the model (Gers and Schmidhuber, 2001; Mikolov et al., 2010; Sundermeyer et al., 2012). To train a neural language model, texts must be decomposed into words or morphemes. However, since a training set of sentences generally contains a huge number of words or morphemes, the dictionary becomes very large, increasing model complexity. In addition, word-level or morpheme-level models can generate only vocabulary contained in the training set. Furthermore, for highly morphological languages such as Turkish, Hungarian, Russian, Finnish, or Korean, morpheme analyzers are more likely to introduce errors in the decomposition process (Lankinen et al., 2016). This paper therefore proposes a phoneme-level language model for Korean based on LSTM models. A phoneme, such as a vowel or a consonant, is the smallest unit composing Korean text. We constructed language models using three or four LSTM layers, trained with stochastic gradient descent and more advanced optimization algorithms such as Adagrad, RMSprop, Adadelta, Adam, Adamax, and Nadam. A simulation study was conducted on Old Testament texts using the deep learning package Keras with a Theano backend.
After pre-processing the texts, the dataset included 74 unique characters, including vowels, consonants, and punctuation marks. We then constructed input vectors of 20 consecutive characters paired with the following 21st character as output. In total, 1,023,411 input-output pairs were included in the dataset, divided into training, validation, and test sets in a 70:15:15 proportion. All simulations were conducted on a system equipped with an Intel Xeon CPU (16 cores) and an NVIDIA GeForce GTX 1080 GPU. We compared the loss evaluated on the validation set, the perplexity evaluated on the test set, and the time taken to train each model. All optimization algorithms except stochastic gradient descent showed similar validation loss and perplexity, clearly superior to those of stochastic gradient descent, which also took the longest to train for both the 3- and 4-LSTM-layer models. On average, the 4-LSTM-layer model took 69% longer to train than the 3-LSTM-layer model, yet its validation loss and perplexity were not significantly improved and even worsened under specific conditions. On the other hand, when comparing the automatically generated sentences, the 4-LSTM-layer model tended to generate sentences closer to natural language than the 3-LSTM-layer model. Although the completeness of the generated sentences differed slightly between the models, sentence generation performance was quite satisfactory under all simulation conditions: the models generated only legitimate Korean letters, and the use of postpositions and the conjugation of verbs were almost grammatically perfect. The results of this study are expected to be widely used for processing the Korean language in the fields of language processing and speech recognition, which are foundations of artificial intelligence systems.
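The input-output construction described above, a sliding window of 20 characters predicting the 21st, can be sketched in a few lines. Plain English text stands in here for the decomposed Korean phoneme sequence used in the paper:

```python
# Build (input, target) pairs for a character-level language model: each
# window of `length` consecutive characters predicts the next character,
# mirroring the 20-character input / 21st-character output setup above.
def make_windows(text: str, length: int = 20):
    pairs = []
    for i in range(len(text) - length):
        pairs.append((text[i:i + length], text[i + length]))
    return pairs

# Stand-in corpus; the paper uses phoneme-decomposed Old Testament text.
text = "in the beginning god created the heaven and the earth"
pairs = make_windows(text)

# Each pair is a (20-character input, next-character target).
print(len(pairs), pairs[0])
```

For training, each character in the input window is one-hot encoded over the 74-symbol vocabulary, giving the (20, 74)-shaped input tensors the stacked LSTM layers consume.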