• Title/Summary/Keyword: Power network


Development and Validation of the Social Entrepreneurship Measurement Tools: From an Organizational-Level Behavioral Perspective (사회적기업가정신 척도 개발 및 타당화 연구: 조직차원의 행동적 관점에서)

  • Cho, Han Jun
    • Asia-Pacific Journal of Business Venturing and Entrepreneurship / v.18 no.3 / pp.97-113 / 2023
  • In order to generalize the social entrepreneurship model that includes cooperation orientation and to increase the model's usability, this study developed a measurement tool and tested it with executives of social enterprises. For the development of the measurement tool, preliminary measurement items were formed through a review of previous studies, and a questionnaire was tentatively composed of 40 measurement items in five areas through an expert panel review of the items. A total of 389 questionnaires were collected from Korean social enterprise managers, and exploratory and confirmatory factor analyses were conducted using the 375 questionnaires that could be analyzed. Five factors covering 24 items were derived through exploratory factor analysis and reliability analysis. Through a series of analyses, including first- and second-order confirmatory factor analysis, the model fit of the newly constructed social entrepreneurship research model was confirmed, and the validity and reliability of the measurement tool were verified. As a result, the model fit of the social entrepreneurship model (social value orientation, innovativeness, pro-activeness, risk-taking, cooperation orientation) is verified, thereby improving the theoretical explanatory power of social entrepreneurship research and providing a basis for the theoretical expansion of follow-up research. The study demonstrated that the social entrepreneurship model with added cooperation orientation can be generalized, and that the measurement tool used in this study can be widely used, theoretically and practically, to measure social entrepreneurship. In addition, it was confirmed that cooperation orientation is manifested in corporate decision-making and activities for resource mobilization and capacity building, opportunity and performance creation, social capital and network reinforcement, and the establishment of governance in social enterprises.
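
The validation workflow summarized above (exploratory factor analysis followed by reliability analysis) can be illustrated with a minimal sketch. The item names, the five-factor setting, and the use of the factor_analyzer package are assumptions for illustration, not the authors' actual toolchain or data:

```python
# Minimal EFA + reliability sketch (hypothetical item data; not the authors' actual analysis).
import numpy as np
import pandas as pd
from factor_analyzer import FactorAnalyzer  # assumed third-party package

# responses: one row per manager, one column per survey item (placeholder names "q01"..."q40")
responses = pd.read_csv("social_entrepreneurship_items.csv")  # hypothetical file

# Exploratory factor analysis with five factors, as in the abstract.
efa = FactorAnalyzer(n_factors=5, rotation="varimax")
efa.fit(responses)
loadings = pd.DataFrame(efa.loadings_, index=responses.columns)
print(loadings.round(2))  # inspect which items load on which factor

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Reliability (internal consistency) of a set of items measuring one factor."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

# Example: reliability of the items retained for one factor (placeholder column names).
print(cronbach_alpha(responses[["q01", "q02", "q03", "q04", "q05"]]))
```

Items dropped after this step would then feed a confirmatory factor analysis to check model fit, as the abstract describes.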

Contactless Data Society and Reterritorialization of the Archive (비접촉 데이터 사회와 아카이브 재영토화)

  • Jo, Min-ji
    • The Korean Journal of Archival Studies / no.79 / pp.5-32 / 2024
  • The Korean government ranked 3rd among 193 UN member countries in the UN's 2022 e-Government Development Index. Korea, which has consistently been evaluated as a top country, can clearly be said to be a world leader in e-government. The lubricant of e-government is data. Data itself is neither information nor a record, but it is a source of information and records and a resource for knowledge. Since administrative actions through electronic systems have become widespread, the production of data-based records and the associated technology have naturally expanded and evolved. Technology may seem value-neutral, but in fact technology itself reflects a specific worldview. The digital order of new technologies, armed with hyper-connectivity and super-intelligence, not only has a profound influence on traditional power structures but also has a similar influence on existing media for transmitting information and knowledge. Moreover, new technologies and media, including data-based generative artificial intelligence, are by far the hottest topic. The all-round growth and spread of digital technology has led to the augmentation of human capabilities and the outsourcing of thinking. This also involves a variety of problems, ranging from deep fakes and other fake images, automated profiling, and AI hallucinations that present fabrications as if they were real, to copyright infringement involving machine-learning data. Moreover, radical connectivity enables the instantaneous sharing of vast amounts of data and relies on the technological unconscious to generate actions without awareness. Another irony of the digital world and online networks, which are based on immaterial distribution and logical existence, is that access and contact can only be made through physical tools. Digital information is a logical object, but digital resources cannot be read or utilized without some type of device to relay them. In that respect, machines in today's technological society have gone beyond the level of simple assistance, and there are points at which it is difficult to say that the entry of machines into human society is a natural pattern of change resulting from advanced technological development, because perspectives on machines change over time. What matters are the social and cultural implications of changes in the way records are produced as a result of communication and action through machines. In the archive field as well, it is time to research what problems a data-based archive society will face as a result of technological change toward a hyper-intelligent, hyper-connected society, who will prove the continuous activity of records and data, and what will be the main drivers of media change. This study began from the need to recognize that archives are not only records, the results of actions, but also data as strategic assets. On this basis, the author considered how archives can expand their traditional boundaries and achieve reterritorialization in a data-driven society.

A Study on Interactions of Competitive Promotions Between the New and Used Cars (신차와 중고차간 프로모션의 상호작용에 대한 연구)

  • Chang, Kwangpil
    • Asia Marketing Journal / v.14 no.1 / pp.83-98 / 2012
  • In a market where new and used cars compete with each other, we would run the risk of obtaining biased estimates of the cross elasticity between them if we focused only on new cars or only on used cars. Unfortunately, most previous studies of the automobile industry have focused only on new car models, without taking into account the effect of used cars' pricing policy on new cars' market shares and vice versa, resulting in inadequate prediction of reactive pricing in response to competitors' rebates or price discounts. There are some exceptions, however. Purohit (1992) and Sullivan (1990) looked into both the new and used car markets at the same time to examine the effect of new car model launches on used car prices. But their studies have some limitations in that they employed the average used car prices reported in the NADA Used Car Guide instead of actual transaction prices; some of their conflicting results may be due to this problem in the data. Park (1998) recognized this problem and used actual prices in his study. His work is notable in that he investigated the qualitative effect of new car model launches on the pricing policy of used cars in terms of reinforcement of brand equity. The current work also uses actual prices, as in Park (1998), but explores the quantitative aspect of competitive price promotion between new and used cars of the same model. In this study, I develop a model that assumes that the cross elasticity between new and used cars of the same model is higher than that between new and used cars of different models. Specifically, I apply a nested logit model that assumes car model choice at the first stage and the choice between new and used cars at the second stage. This proposed model is compared to the IIA (Independence of Irrelevant Alternatives) model, which assumes that there is no decision hierarchy and that new and used cars of different models are all substitutable at the first stage. The data for this study are drawn from the Power Information Network (PIN), an affiliate of J.D. Power and Associates. PIN collects sales transaction data from a sample of dealerships in the major metropolitan areas in the U.S. These are retail transactions, i.e., sales or leases to final consumers, excluding fleet sales and including both new car and used car sales. Each observation in the PIN database contains the transaction date, the manufacturer, model year, make, model, trim and other car information, the transaction price, consumer rebates, the interest rate, term, amount financed (when the vehicle is financed or leased), etc. I used data for compact cars sold during the period January 2009-June 2009. The new and used cars of the top nine selling models are included in the study: Mazda 3, Honda Civic, Chevrolet Cobalt, Toyota Corolla, Hyundai Elantra, Ford Focus, Volkswagen Jetta, Nissan Sentra, and Kia Spectra. These models accounted for 87% of category unit sales. Empirical application of the nested logit model showed that the proposed model outperformed the IIA (Independence of Irrelevant Alternatives) model in both the calibration and holdout samples. The other comparison model, which assumes the choice between new and used cars at the first stage and car model choice at the second stage, turned out to be mis-specified, since the dissimilarity parameter (i.e., the inclusive or category value parameter) was estimated to be greater than 1.
Post hoc analysis based on the estimated parameters was conducted employing a modified Lanczos iterative method. This method is intuitively appealing. For example, suppose a new car offers a certain amount of rebate and gains market share at first. In response to this rebate, the used car of the same model keeps decreasing its price until it regains the lost market share and maintains the status quo. The new car then settles down to a lowered market share due to the used car's reaction. The method enables us to find the amount of price discount needed to maintain the status quo and the equilibrium market shares of the new and used cars. In the first simulation, I used the Jetta as a focal brand to see how its new and used cars set prices, rebates or APR interactively, assuming that reactive cars respond to price promotion to maintain the status quo. The simulation results showed that the IIA model underestimates cross elasticities, and therefore suggests a less aggressive used car price discount in response to new cars' rebates than the proposed nested logit model does. In the second simulation, I used the Elantra to reconfirm the result for the Jetta and came to the same conclusion. In the third simulation, I had the Corolla offer a $1,000 rebate to see what the best response would be for the Elantra's new and used cars. Interestingly, the Elantra's used car could maintain the status quo by offering a smaller price discount ($160) than the new car ($205). Future research might explore the plausibility of alternative nested logit models. For example, the NUB model, which assumes the choice between new and used cars at the first stage and brand choice at the second stage, could be a possibility, even though it was rejected in the current study because of mis-specification (a dissimilarity parameter turned out to be higher than 1). The NUB model may have been rejected due to true mis-specification or due to the data structure transmitted from a typical car dealership. In a typical car dealership, both new and used cars of the same model are displayed. Because of this, the BNU model, which assumes brand choice at the first stage and the choice between new and used cars at the second stage, may have been favored in the current study, since customers first choose a dealership (brand) and then choose between new and used cars in this market environment. However, if there are dealerships that carry both new and used cars of various models, then the NUB model might fit the data as well as the BNU model. Which model is a better description of the data is an empirical question. In addition, it would be interesting to test a probabilistic mixture model of the BNU and NUB on a new data set.
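
The two-stage structure described above (car model choice at the first stage, new versus used at the second) can be sketched with a small nested logit probability calculation. The utility values, the dissimilarity parameter, and the numpy-based implementation below are illustrative assumptions, not the estimates reported in the study:

```python
# Minimal nested logit sketch: nests = car models, alternatives = new/used within each nest.
# Utilities and the dissimilarity parameter are made-up numbers for illustration only.
import numpy as np

def nested_logit_probs(utilities: dict, dissimilarity: float) -> dict:
    """utilities: {model: {"new": V, "used": V}}; dissimilarity should lie in (0, 1]."""
    inclusive = {}   # log-sum (inclusive value) of each nest
    within = {}      # conditional choice probabilities within each nest
    for model, alts in utilities.items():
        scaled = {a: np.exp(v / dissimilarity) for a, v in alts.items()}
        denom = sum(scaled.values())
        within[model] = {a: s / denom for a, s in scaled.items()}
        inclusive[model] = np.log(denom)

    # First-stage (model) probabilities driven by the inclusive values.
    top = {m: np.exp(dissimilarity * iv) for m, iv in inclusive.items()}
    top_denom = sum(top.values())
    return {m: {a: (top[m] / top_denom) * p for a, p in within[m].items()}
            for m in utilities}

# Hypothetical utilities for two models; a rebate on the "new" alternative raises its utility.
utils = {"Jetta": {"new": 1.0, "used": 0.6}, "Elantra": {"new": 0.9, "used": 0.7}}
print(nested_logit_probs(utils, dissimilarity=0.5))
# A dissimilarity (inclusive value) parameter estimated above 1 would signal mis-specification,
# which is how the NUB variant was rejected in the abstract.
```

In such a sketch, the simulations described above amount to raising the new car's utility by the rebate and then lowering the used car's effective price (raising its utility) until the share shift is offset.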

Recent Research for the Seismic Activities and Crustal Velocity Structure (국내 지진활동 및 지각구조 연구동향)

  • Kim, Sung-Kyun;Jun, Myung-Soon;Jeon, Jeong-Soo
    • Economic and Environmental Geology / v.39 no.4 s.179 / pp.369-384 / 2006
  • The Korean Peninsula, located on the southeastern part of the Eurasian plate, belongs to an intraplate region. Compared to interplate earthquakes, intraplate earthquakes are characterized by low and infrequent seismicity and a sparse, irregular distribution of epicenters. To evaluate the exact seismic activity of an intraplate region, long-term seismic data including historical earthquake data should be archived. Fortunately, long-term historical earthquake records covering about 2,000 years are available for the Korean Peninsula. Analysis of these historical and instrumental earthquake data shows that seismic activity was very high in the 16th-18th centuries and is more active in the Yellow Sea area than in the East Sea area. Compared with the high seismic activity of northeastern China in the 16th-18th centuries, it is inferred that seismic activity in the two regions is closely related. The general trend of the epicenter distribution follows an SE-NW direction. In the Korean Peninsula, the first seismic station was installed at Incheon in 1905, and 5 additional seismic stations were installed by 1943. There was no seismic station from 1945 to 1962, but a World Wide Standardized Seismograph was installed at Seoul in 1963. In 1990, the Korea Meteorological Administration (KMA) established a centralized modern seismic network operating in real time, consisting of 12 stations. After that, many institutes expanded their own seismic networks in the Korean Peninsula. Now KMA operates 35 velocity-type seismic stations and 75 accelerometers, and the Korea Institute of Geoscience and Mineral Resources operates 32 velocity-type stations and 16 accelerometers. The Korea Institute of Nuclear Safety and the Korea Electric Power Research Institute operate 4 and 13 stations, respectively, consisting of velocity-type seismometers and accelerometers. In and around the Korean Peninsula, 27 intraplate earthquake mechanisms since 1936 were analyzed to understand the regional stress orientation and tectonics. These earthquakes are the largest ones in this century and may represent the characteristics of earthquakes in this region. The focal mechanisms of these earthquakes show predominant strike-slip faulting with a small thrust component. The average P-axis is almost horizontal, oriented ENE-WSW. In northeastern China, strike-slip faulting is dominant, and the nearly horizontal average P-axis oriented ENE-WSW is very similar to that of the Korean Peninsula. On the other hand, in the eastern part of the East Sea, thrust faulting is dominant and the average P-axis is horizontal, oriented ESE-WNW. This indicates that not only the Pacific Plate subducting in the east but also the indenting Indian Plate controls earthquake mechanisms in the far east of the Eurasian Plate. A crustal velocity model is very important for determining the hypocenters of local earthquakes, but the crustal model in and around the Korean Peninsula is still not well constrained, because sufficient seismic data have not been accumulated. To solve this problem, reflection and refraction seismic surveys and seismic wave analysis methods have been applied simultaneously to two long cross-sections traversing the southern Korean Peninsula since 2002. This survey should be conducted continuously.
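
The abstract's point that hypocenter determination depends on the assumed crustal velocity model can be illustrated with a toy grid-search location. The station coordinates, arrival times, homogeneous velocity values, and the numpy implementation below are illustrative assumptions only:

```python
# Toy epicenter grid search with a homogeneous crustal velocity model (illustrative only).
import numpy as np

stations = np.array([[0.0, 0.0], [60.0, 10.0], [20.0, 50.0], [45.0, 40.0]])  # km, hypothetical
arrivals = np.array([5.2, 7.9, 6.4, 7.1])  # observed P arrival times in seconds, hypothetical

def locate(velocity_kms: float, grid_step: float = 1.0):
    """Return (rms residual, x, y, origin time) minimizing travel-time residuals."""
    best = None
    for x in np.arange(0.0, 80.0, grid_step):
        for y in np.arange(0.0, 80.0, grid_step):
            dist = np.hypot(stations[:, 0] - x, stations[:, 1] - y)
            travel = dist / velocity_kms
            t0 = np.mean(arrivals - travel)          # best-fit origin time for this trial point
            rms = np.sqrt(np.mean((arrivals - (t0 + travel)) ** 2))
            if best is None or rms < best[0]:
                best = (rms, x, y, t0)
    return best

# Changing the assumed velocity shifts the preferred location, which is why a well-constrained
# crustal velocity model matters for routine hypocenter determination.
print(locate(velocity_kms=6.0))
print(locate(velocity_kms=6.5))
```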

Effects of Joining Coalition Loyalty Program : How the Brand affects Brand Loyalty Based on Brand Preference (브랜드 선호에 따라 제휴 로열티 프로그램 가입이 가맹점 브랜드 충성도에 미치는 영향)

  • Rhee, Jin-Hwa
    • Journal of Distribution Research / v.17 no.1 / pp.87-115 / 2012
  • Introduction: These days, loyalty programs are one of the most common marketing mechanisms (Lacey & Sneath, 2006; Nunes & Dreze, 2006; Uncles et al., 2003). In recent years, the coalition loyalty program has become more noticeable as one of their more advanced forms. In the past, loyalty programs were operated independently by a single product brand or a single retail channel brand. Now, companies using a coalition loyalty program share their programs as one single service, and the participating companies continue to enjoy the benefits of their existing programs as well as positive spillover effects from the other companies in the network. Instead of earning or spending points at a single retail channel or brand, consumers have more opportunities to utilize their points and are able to purchase the products of other participating companies. Issues related to the form of a loyalty program are essentially connected with consumers' perception of how convenient the program is to use, which matters for distribution companies' strategic marketing plans. Although the coalition loyalty program is a popular corporate marketing strategy, only a few studies have been published. Compared to an independent loyalty program, a coalition loyalty program operated by a third-party partnership has the following constraints: companies cannot autonomously modify the structure of the program for their individual benefit, and there is no guarantee that a company will operate and participate in the program continuously simply by signing a contract. Thus, it is as important to study how a coalition loyalty program affects companies' success, and through what process, as it is to study the effects of independent programs. This study complements the lack of research on coalition loyalty programs. Its purpose is to find out how consumer loyalty toward affiliated brands is affected, and through what cause and mechanism. Past studies on loyalty programs only provided analyses of variation in performance, but this study focuses specifically on the causes of those results. To do this, the study is designed to verify three primary objectives. First, based on the switching-barriers literature (Fornell, 1992; Ping, 1993; Jones et al., 2000), 'brand attractiveness' and 'brand switching cost' are treated as antecedents, and their roles as causes of change in 'brand loyalty' are investigated. Second, the influence of consumers' perception and attitude prior to joining the coalition loyalty program, the influence of the program on the retail brands, and the spillover effects on brand attractiveness and switching cost after joining the coalition program are verified. Finally, the study uses 'prior brand preference' as a variable and examines the relationship between the effects of the coalition loyalty program and the level of prior preference. Hypothesis 1: After joining the coalition loyalty program, the more preferred brand (compared to the less preferred brand) will show a stronger influence of brand attractiveness on brand loyalty. Hypothesis 2: After joining the coalition loyalty program, the less preferred brand (compared to the more preferred brand) will show a stronger influence of brand switching cost on brand loyalty. Hypothesis 3: (1) Brand attractiveness and (2) brand switching cost of the more preferred brand (before joining the coalition loyalty program) will have a more positive influence on (1) the program attractiveness and (2) the program switching cost of the coalition loyalty program (after joining) than those of the less preferred brand. Hypothesis 4:
After joining the coalition loyalty program, (1) brand attractiveness and (2) brand switching cost of the more preferred brand will receive more positive impacts from (1) the program attractiveness and (2) the program switching cost of the coalition loyalty program than those of the less preferred brand. Hypothesis 5: After joining the coalition loyalty program, (1) brand attractiveness and (2) brand switching cost of the more preferred brand will receive less impact from (1) the brand attractiveness and (2) the brand switching cost of the other brands that joined simultaneously (and have a different preference level) than those of the less preferred brand. Method: To validate the hypotheses, this study applies an experimental method using a virtual coalition loyalty program scenario built around actual brands that consumers have used or can use. Participants completed two experiments. In the first experiment, six coalition brands, selected in advance on the basis of prior research, were presented. After choosing a high-preference brand and a low-preference brand, participants rated each brand's attractiveness, switching cost, and loyalty. A one-hour break was provided before the second experiment. In the second experiment, the virtual coalition loyalty program "SaveBag" was introduced, and participants were informed that "SaveBag" would be a new alliance of the six coalition brands from the first experiment. The attractiveness and switching cost of the coalition program were measured, and the brand attractiveness and switching cost of the high-preference brand and the low-preference brand were measured in the same way as in the first experiment. Limitations and future research: This study examined the effects of a coalition loyalty program using a virtual scenario rather than actual program data. Future studies should therefore compare and analyze CLP panel data to provide more in-depth findings. In addition, this study only examined the effectiveness of a coalition loyalty program. However, there are two types of loyalty programs, single and coalition, and the success of a coalition loyalty program will depend on market brand power and prior customer attitudes. It would therefore be interesting to compare the effects of the two program types in future research.

Context Sharing Framework Based on Time Dependent Metadata for Social News Service (소셜 뉴스를 위한 시간 종속적인 메타데이터 기반의 컨텍스트 공유 프레임워크)

  • Ga, Myung-Hyun;Oh, Kyeong-Jin;Hong, Myung-Duk;Jo, Geun-Sik
    • Journal of Intelligence and Information Systems / v.19 no.4 / pp.39-53 / 2013
  • The emergence of Internet technology and SNS has increased the flow of information and has changed the way people communicate, from one-way to two-way communication. Users not only consume and share information, they can also create it and share it with their friends across social network services. This has also made social media behavior one of the most important communication tools, including Social TV. Social TV is a form in which people can watch a TV program and at the same time share information about its content with friends through social media. Social news is becoming popular and is also known as participatory social media: it channels user interest through the Internet to represent social issues and builds news credibility based on users' reputations. However, conventional platforms for news services focus only on the news recommendation domain. Recent developments in SNS have changed this landscape and allow users to share and disseminate news, but conventional platforms do not provide any special way for news to be shared. Currently, social news services only allow users to access an entire news item; they cannot access the parts of the content related to their interests. For example, if a user is interested in only part of a news item and wants to share that part, it is still hard to do so, and in the worst case users may understand the shared news in a different context. To solve this, a social news service must provide a method for supplying additional information. For example, Yovisto, an academic video search service, provides time-dependent metadata extracted from videos: users can search for and watch part of a video according to the time-dependent metadata, and they can share that content with friends on social media. Yovisto divides and synchronizes a video based on the points at which the slide presentation changes to another page. However, this method cannot be applied to news video, since news video does not incorporate a slide presentation; a segmentation method is required to separate the news video and to create time-dependent metadata. In this paper, a time-dependent metadata-based framework is proposed to segment news content and to provide time-dependent metadata so that users can use context information to communicate with their friends. The transcript of the news is divided using the proposed story segmentation method. A tag is provided to represent the entire content of the news, and sub-tags are provided to indicate the segmented news, each including the starting time of that segment. The time-dependent metadata help users track the news information and allow them to leave a comment on each segment of the news. Users may also share the news based on the time metadata, either as segmented news or as a whole, which helps recipients understand the shared news. To demonstrate the performance, we evaluated both the story segmentation accuracy and the tag generation. For this purpose, we measured the accuracy of the story segmentation through semantic similarity and compared it to a benchmark algorithm. Experimental results show that the proposed method outperforms the benchmark algorithms in terms of story segmentation accuracy. It is important to note that sub-tag accuracy matters most, since sub-tags are the part of the proposed framework used to share a specific news context with others.
To extract more accurate sub-tags, we created a stop-word list of terms unrelated to the news content, such as the names of anchors or reporters, and applied it to the framework. We then analyzed the accuracy of the tags and sub-tags that represent the context of the news. From this analysis, the proposed framework appears to help users share their opinions, together with context information, on social media and social news services.
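
The tag/sub-tag structure with segment starting times described above can be sketched as a small data model. The field names, the URL convention with a start-time parameter, and the Python classes below are illustrative assumptions rather than the paper's actual schema:

```python
# Minimal sketch of time-dependent metadata for a segmented news video (hypothetical schema).
from dataclasses import dataclass, field
from typing import List

@dataclass
class SubTag:
    label: str          # tag describing one news segment
    start_seconds: int  # starting time of the segment within the video

@dataclass
class NewsItem:
    video_url: str
    tag: str                          # tag summarizing the whole news item
    sub_tags: List[SubTag] = field(default_factory=list)

    def share_link(self, sub_tag: SubTag) -> str:
        """Build a shareable link pointing at one segment (hypothetical URL convention)."""
        return f"{self.video_url}?t={sub_tag.start_seconds}"

news = NewsItem(
    video_url="https://example.com/news/12345",
    tag="budget debate",
    sub_tags=[SubTag("opening summary", 0), SubTag("opposition response", 95)],
)
# Sharing only the segment a user cares about, instead of the whole news item.
print(news.share_link(news.sub_tags[1]))
```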

Memory Organization for a Fuzzy Controller.

  • Jee, K.D.S.;Poluzzi, R.;Russo, B.
    • Proceedings of the Korean Institute of Intelligent Systems Conference / 1993.06a / pp.1041-1043 / 1993
  • Fuzzy logic based control theory has gained much interest in the industrial world, thanks to its ability to formalize and solve in a very natural way many problems that are very difficult to quantify at an analytical level. This paper shows a solution for treating membership functions inside hardware circuits. The proposed hardware structure optimizes the memory size by using a particular form of vectorial representation. The process of memorizing fuzzy sets, i.e. their membership functions, has always been one of the more problematic issues for hardware implementation, due to the quite large memory space that is needed. To simplify the implementation, it is common practice [1,2,8,9,10,11] to limit the membership functions either to those having a triangular or trapezoidal shape, or to a pre-defined shape. These kinds of functions are able to cover a large spectrum of applications with a limited usage of memory, since they can be memorized by specifying very few parameters (height, base, critical points, etc.). This, however, results in a loss of computational power due to computation on the intermediate points. A solution to this problem is obtained by discretizing the universe of discourse U, i.e. by fixing a finite number of points and memorizing the value of the membership functions on those points [3,10,14,15]. Such a solution provides a satisfying computational speed and a very high precision of definition, and gives users the opportunity to choose membership functions of any shape. However, significant memory waste can also occur: it is indeed possible that, for each of the given fuzzy sets, many elements of the universe of discourse have a membership value equal to zero. It has also been noticed that in almost all cases the points common to several fuzzy sets, i.e. points with non-null membership values, are very few. More specifically, in many applications, for each element u of U there exist at most three fuzzy sets for which the membership value is not null [3,5,6,7,12,13]. Our proposal is based on this hypothesis. Moreover, we use a technique that, even though it does not restrict the shapes of the membership functions, strongly reduces the computation time for the membership values and optimizes the memorization of the functions. Figure 1 shows a term set whose characteristics are common for fuzzy controllers and to which we will refer in the following. This term set has a universe of discourse with 128 elements (to have a good resolution), 8 fuzzy sets that describe the term set, and 32 levels of discretization for the membership values. The number of bits necessary for the given specifications is therefore 5 for the 32 truth levels, 3 for the 8 membership functions, and 7 for the 128 levels of resolution. The memory depth is given by the dimension of the universe of discourse (128 in our case) and is represented by the memory rows. The length of a word of memory is defined by Length = nfm * (dm(m) + dm(fm)), where nfm is the maximum number of non-null membership values for any element of the universe of discourse, dm(m) is the dimension (in bits) of a membership value m, and dm(fm) is the dimension of the word representing the index of the corresponding membership function. In our case, Length = 3 * (5 + 3) = 24. The memory dimension is therefore 128*24 bits. If we had chosen to memorize all values of the membership functions, we would have needed to store on each memory row the membership value of every fuzzy set for that element, with a word dimension of 8*5 bits.
Therefore, the dimension of the memory would have been 128*40 bits. Consistently with our hypothesis, in fig. 1 each element of the universe of discourse has a non-null membership value on at most three fuzzy sets. Focusing on the elements 32, 64, and 96 of the universe of discourse, they will be memorized as follows: The computation of the rule weights is done by comparing the bits that represent the index of the membership function with the word of the program memory. The output bus of the Program Memory (μCOD) is given as input to a comparator (combinatory net). If the index is equal to the bus value, then one of the non-null weights derived from the rule is produced as output; otherwise the output is zero (fig. 2). It is clear that the memory dimension of the antecedent is reduced in this way, since only non-null values are memorized. Moreover, the time performance of the system is equivalent to the performance of a system using vectorial memorization of all weights. The dimensioning of the word is influenced by some parameters of the input variable. The most important parameter is the maximum number of membership functions (nfm) having a non-null value for each element of the universe of discourse. From our study in the field of fuzzy systems, we see that typically nfm ≤ 3 and there are at most 16 membership functions. At any rate, this value can be increased up to the physical dimensional limit of the antecedent memory. A less important role in the optimization of the word dimension is played by the number of membership functions defined for each linguistic term. The table below shows the required word dimension as a function of these parameters and compares our proposed method with the method of vectorial memorization [10]. Summing up, the characteristics of our method are: users are not restricted to membership functions with specific shapes; the number of fuzzy sets and the resolution of the vertical axis have a very small influence on memory space; weight computations are done by a combinatorial network, and therefore the time performance of the system is equivalent to that of the vectorial method; the number of non-null membership values on any element of the universe of discourse is limited, but this constraint is usually not very restrictive, since many controllers obtain good precision with only three non-null weights. The method briefly described here has been adopted by our group in the design of an optimized version of the coprocessor described in [10].
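
A small sketch of the word-length arithmetic above, comparing the proposed compressed layout with full vectorial memorization; the function names and the Python form are illustrative, but the numbers reproduce the 128*24 versus 128*40 bit example from the abstract:

```python
# Memory sizing for the fuzzy-set antecedent memory, following the formula in the abstract:
#   Length = nfm * (dm(m) + dm(fm))
import math

def word_length(nfm: int, truth_levels: int, num_fuzzy_sets: int) -> int:
    """Bits per memory row when only the (at most nfm) non-null memberships are stored."""
    dm_m = math.ceil(math.log2(truth_levels))     # bits per membership value
    dm_fm = math.ceil(math.log2(num_fuzzy_sets))  # bits for the membership-function index
    return nfm * (dm_m + dm_fm)

def memory_bits(universe_size: int, nfm: int, truth_levels: int, num_fuzzy_sets: int) -> int:
    return universe_size * word_length(nfm, truth_levels, num_fuzzy_sets)

# Example from the abstract: 128-element universe, 8 fuzzy sets, 32 truth levels, nfm = 3.
compressed = memory_bits(128, 3, 32, 8)          # 128 * 24 = 3072 bits
vectorial = 128 * 8 * math.ceil(math.log2(32))   # 128 * 40 = 5120 bits, all values stored
print(compressed, vectorial)
```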
