• Title/Summary/Keyword: Design Speed


A Study on 21st Century Fashion Market in Korea (21세기 한국패션시장에 대한 연구)

  • Kim, Hye-Young
    • The Journal of Natural Sciences / v.10 no.1 / pp.209-216 / 1998
  • The results of dividing the 21st century's Korean fashion market into the consumer market, the fashion market, and a new marketing strategy are as follows. The 21st-century consumer market shows, first, a fashion democracy phenomenon: as many people move away from unconditionally following fashion, consumers choose and create their own fashion through subjective judgment. Second, the pursuit of total fashion: consumers in the future are likely to aim not at differentiating themselves with small item products, but at coordinating various fashion elements based on their individuality and sense of values. Third, a world-quality orientation: with improved living standards, consumers' fashion sensibility increasingly emphasizes world-class materials, quality, design, and brand image. Fourth, with the arrival of neo-rationalism, consumers show a growing tendency to emphasize wisdom and solidity in goods strategy, pursuing high-quality fashion while demanding reasonable prices. Fifth, a concept orientation: consumers increasingly pursue concepts appropriate to their individual life scenes. Looking ahead at the composition of the 21st-century fashion market: first, the sportive casual zone will draw more attention than any other zone, because interest in sports will grow with the increase in leisure time and the expansion of time and space in the 21st century, and ecology will become an important issue in the sports sense owing to human beings' innate affinity for nature. Second, the down-aging phenomenon will accelerate as a major trend. Third, a retro phenomenon, a concept contrary to digital and high tech, will become another major trend through remake, antique, and classic concepts in the fashion market, together with the ecology trend. The new marketing strategies for coping with the changing fashion market are as follows. First, with the trend toward borderless concepts, the borders between apparel categories are becoming vague; for example, firms offer custom-made products to consumers. Second, enterprises will polarize into 'gorillas' and 'guerrillas', with guerrillas targeting niche markets; guerrillas value individual creative study and pursue scene adherence with high sensitivity. This polarization, however, becomes a mutually complementary relationship, with gorillas adopting guerrilla movements and guerrillas adopting gorilla-style high tech. Third, with the development of value retailing, mass-merchandising enterprises known as category killers will expand into new product fields and increase their share of business. Fourth, through outsourcing, the trend of keeping only each enterprise's core strengths in-house and drawing on external functions for the rest is gradually strengthening. Fifth, with the expansion of non-store sales, Internet and CD-ROM sales are being added to communication sales channels such as catalogues; an eminent American think tank expects that 5-5% of total sales of clothes and home goods in 2010 will be made through non-store channels. Accordingly, to cope with these changes, the following are needed: first, international, global-level marketing; second, technological improvement; and third, knowledge-creating marketing.


Enhanced Production of Carboxymethylcellulase by a Newly Isolated Marine Microorganism Bacillus atrophaeus LBH-18 Using Rice Bran, a Byproduct from the Rice Processing Industry (미강을 이용한 해양미생물 Bacillus atrophaeus LBH-18 유래의 carboxymethylcellulase 생산의 최적화)

  • Kim, Yi-Joon;Cao, Wa;Lee, Yu-Jeong;Lee, Sang-Un;Jeong, Jeong-Han;Lee, Jin-Woo
    • Journal of Life Science / v.22 no.10 / pp.1295-1306 / 2012
  • A microorganism producing carboxymethylcellulase (CMCase) was isolated from seawater and identified as Bacillus atrophaeus. The strain was designated B. atrophaeus LBH-18 based on its evolutionary distance and the phylogenetic tree obtained from 16S rDNA sequencing and the neighbor-joining method. The optimal rice bran concentration (68.1 g/l), peptone concentration (9.1 g/l), and initial pH (7.0) of the medium for cell growth were determined with Design Expert software based on the response surface method; the corresponding conditions for CMCase production were 55.2 g/l, 6.6 g/l, and pH 7.1, respectively. The optimal temperature for both cell growth and CMCase production by B. atrophaeus LBH-18 was 30°C. The optimal agitation speed and aeration rate for cell growth in a 7-l bioreactor were 324 rpm and 0.9 vvm, respectively, whereas those for CMCase production were 343 rpm and 0.6 vvm. The optimal inner pressure for cell growth and CMCase production in a 100-l bioreactor was 0.06 MPa. Maximal production of CMCase under optimal conditions in the 100-l bioreactor was 127.5 U/ml, 1.32 times higher than without inner pressure. In this study, rice bran was developed as a carbon source for industrial-scale production of CMCase by B. atrophaeus LBH-18. Reducing the CMCase production time from 7-10 days to 3 days by using a bacterial strain in submerged fermentation also increased CMCase productivity and lowered its production cost.
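
The optimum reported above comes from a response-surface fit. As a rough, hypothetical sketch of that technique (not the authors' Design Expert workflow), one can fit a full second-order polynomial to designed-experiment data and maximize the fitted surface numerically; the data points, bounds, and activity values below are invented for illustration.

```python
# Hypothetical sketch of response-surface optimization; invented data,
# not the authors' Design Expert runs.
import numpy as np
from scipy.optimize import minimize

# Designed-experiment observations: [rice bran g/l, peptone g/l, initial pH]
X = np.array([
    [40, 5.0, 6.0], [40, 5.0, 8.0], [40, 10.0, 6.0], [40, 10.0, 8.0],
    [80, 5.0, 6.0], [80, 5.0, 8.0], [80, 10.0, 6.0], [80, 10.0, 8.0],
    [60, 7.5, 7.0], [60, 7.5, 7.0],
])
y = np.array([60, 62, 70, 68, 75, 73, 90, 85, 110, 108.0])  # CMCase activity, U/ml

def quad_features(x):
    """Full second-order model: intercept, linear, square, and interaction terms."""
    x1, x2, x3 = x
    return np.array([1, x1, x2, x3, x1*x1, x2*x2, x3*x3, x1*x2, x1*x3, x2*x3])

A = np.vstack([quad_features(x) for x in X])
beta, *_ = np.linalg.lstsq(A, y, rcond=None)        # least-squares coefficients

predict = lambda x: quad_features(x) @ beta
# Maximize the fitted surface inside the experimental region.
res = minimize(lambda x: -predict(x), x0=[60, 7.5, 7.0],
               bounds=[(40, 80), (5, 10), (6.0, 8.0)])
print("optimum (rice bran, peptone, pH):", res.x.round(2))
print("predicted CMCase activity (U/ml):", round(-res.fun, 1))
```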

PST Member Behavior Analysis Based on Three-Dimensional Finite Element Analysis According to Load Combination and Thickness of Grouting Layer (하중조합과 충전층 두께에 따른 3차원 유한요소 해석에 의한 PST 부재의 거동 분석)

  • Seo, Hyun-Su;Kim, Jin-Sup;Kwon, Min-Ho
    • Journal of the Korea institute for structural maintenance and inspection / v.22 no.6 / pp.53-62 / 2018
  • With trains becoming faster and the demand for large-volume transport capacity rising, not only in Korea but around the world, track structures for trains have been improving consistently. Precast concrete slab track (PST), a concrete track structure, was developed as a system that can fulfil new safety and economic requirements for railroad traffic. The purpose of this study is to provide the information required for the future development and design of the system by analyzing the behavior of each structural member of the PST system. The stress distributions for combinations of appropriate loads according to the KRL-2012 train load and the KRC code were analyzed through three-dimensional finite element analysis, and results for different thicknesses of the grouting layer are also presented. Among the structural members, the largest stress occurred in the grouting layer, and this stress varied sensitively with the layer thickness and the load combination. Compared with applying only the vertical KRL-2012 load, adding the starting load and the temperature load increased the stress by 3.3 times on the concrete panel and 14.1 times on the HSB. When the thickness of the grouting layer increased from 20 mm to 80 mm, the stress generated in the concrete panel decreased by 4%, while the stress in the grouting layer increased by 24%. As for cracking, tension cracking occurred locally in the grouting layer. These results indicate that, when developing PST systems, more attention should be paid to flexural and tension behavior under horizontal loads than under vertical loads. In addition, the safety of each structural member should be ensured by keeping the grouting layer at least 40 mm thick.
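
For readers unfamiliar with load combinations: checking a combination amounts to scaling and summing the stresses obtained from individual load cases, under the assumption of linear elastic superposition. The sketch below illustrates only that bookkeeping; the member stresses and combination factors are hypothetical, not KRL-2012/KRC values.

```python
# Hypothetical member stresses (MPa) from individual load cases and
# hypothetical combination factors; linear elastic superposition assumed.
stresses = {
    "concrete_panel": {"KRL2012_vertical": 1.8, "starting": 0.9, "temperature": 3.2},
    "grouting_layer": {"KRL2012_vertical": 2.5, "starting": 1.1, "temperature": 4.0},
}
combinations = {
    "vertical_only":   {"KRL2012_vertical": 1.0},
    "vert+start+temp": {"KRL2012_vertical": 1.0, "starting": 1.0, "temperature": 1.0},
}

for member, s in stresses.items():
    for name, factors in combinations.items():
        combined = sum(f * s[case] for case, f in factors.items())
        print(f"{member:>14} | {name:>15} | {combined:5.2f} MPa")
```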

A hybrid algorithm for the synthesis of computer-generated holograms

  • Nguyen The Anh;An Jun Won;Choe Jae Gwang;Kim Nam
    • Proceedings of the Optical Society of Korea Conference / 2003.07a / pp.60-61 / 2003
  • A new approach to reducing the computation time of the genetic algorithm (GA) for making binary phase holograms is described. Synthesized holograms with a diffraction efficiency of 75.8% and a uniformity of 5.8% are proven in computer simulation and demonstrated experimentally. Recently, computer-generated holograms (CGHs) having high diffraction efficiency and design flexibility have been widely developed for applications such as optical information processing, optical computing, and optical interconnection. Among the proposed optimization methods, the GA has become popular owing to its capability of reaching a nearly global optimum. However, there exists a drawback to consider when using the genetic algorithm: the large amount of computation time needed to construct the desired holograms. One major reason the GA's operation can be time-intensive is the expense of computing the cost function, which must Fourier transform the parameters encoded on the hologram into the fitness value. To remedy this drawback, the artificial neural network (ANN) has been put forward, allowing CGHs to be created easily and quickly [1], but the quality of the reconstructed images is not high enough for high-precision applications. We therefore attempt a new approach that combines the good properties and performance of both the GA and the ANN to make CGHs of high diffraction efficiency in a short time. The optimization of a CGH using the genetic algorithm is a process of iteration comprising selection, crossover, and mutation operators [2]. It is worth noting that the evaluation of the cost function, which aims at selecting better holograms, plays an important role in the implementation of the GA. However, this evaluation process spends much time Fourier transforming the parameters encoded on the hologram into the value to be evaluated; depending on the speed of the computer, it can last up to ten minutes. It is more effective if, instead of merely generating random holograms in the initial step, a set of approximately desired holograms is employed: the initial population then contains fewer trial holograms, which reduces the GA's computation time. Accordingly, a hybrid algorithm that uses a trained neural network to initialize the GA's procedure is proposed, so that the initial population contains fewer random holograms, the remainder being approximately desired holograms. Figure 1 is the flowchart of the hybrid algorithm in comparison with the classical GA. The procedure for synthesizing a hologram on a computer is divided into two steps. First, holograms are simulated by the ANN method [1] to acquire approximately desired holograms: with a teaching data set of 9 characters obtained from the classical GA, 3 layers, 100 hidden nodes, a learning rate of 0.3, and a momentum of 0.5, the trained artificial neural network attains approximately desired holograms in fairly good agreement with what the theory suggests. In the second step, the effect of several parameters on the operation of the hybrid algorithm is investigated. In principle, the operation of the hybrid algorithm and the GA are the same except for the modified initial step; hence the parameter values verified in Ref. [2], such as the probabilities of crossover and mutation, the tournament size, and the crossover block size, remain unchanged, apart from the reduced population size.
A reconstructed image with 76.4% diffraction efficiency and 5.4% uniformity is achieved when the population size is 30, the number of iterations is 2000, the probability of crossover is 0.75, and the probability of mutation is 0.001. A comparison between the hybrid algorithm and the GA in terms of diffraction efficiency and computation time is also evaluated, as shown in Fig. 2. With a 66.7% reduction in computation time and a 2% increase in diffraction efficiency compared with the GA method, the hybrid algorithm demonstrates efficient performance. In the optical experiment, the phase holograms were displayed on a programmable phase modulator (model XGA). Figure 3 shows pictures of diffracted patterns of the letter "0" from holograms generated using the hybrid algorithm; a diffraction efficiency of 75.8% and a uniformity of 5.8% are measured. The simulation and experimental results are in fairly good agreement with each other. In this paper, the genetic algorithm and the neural network have been successfully combined in designing CGHs. This method gives a significant reduction in computation time compared with the GA method while still achieving holograms of high diffraction efficiency and uniformity. This work was supported by No.mOl-2001-000-00324-0 (2002) from the Korea Science & Engineering Foundation.
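
The following toy sketch illustrates the hybrid idea (it is not the authors' implementation): half of the GA's initial population is seeded by a stand-in for the trained network, and binary phase holograms are then evolved against an FFT-based diffraction fitness. The fitness proxy, the proposal heuristic, the target pattern, and the truncation selection (the paper uses tournament selection) are all simplifications; the population size and the crossover and mutation probabilities follow the paper.

```python
# Toy hybrid GA for binary phase holograms: ANN-like proposals seed the
# initial population; fitness is the overlap of the FFT far field with a
# hypothetical target pattern.
import numpy as np

rng = np.random.default_rng(0)
N = 32                                   # hologram: N x N grid of 0/1 (phase 0 or pi)

def fitness(h, target):
    """Proxy for diffraction efficiency: far-field energy landing on the target."""
    far = np.abs(np.fft.fft2(np.exp(1j * np.pi * h))) ** 2
    return (far / far.sum() * target).sum()

def ann_proposals(target, n):
    """Stand-in for the trained network of Ref. [1]: thresholded phase of the
    inverse FFT, lightly perturbed. A real ANN maps target -> hologram."""
    base = (np.angle(np.fft.ifft2(np.sqrt(target))) > 0).astype(float)
    return [np.where(rng.random((N, N)) < 0.05, 1 - base, base) for _ in range(n)]

target = np.zeros((N, N)); target[12:20, 12:20] = 1.0; target /= target.sum()
pop_size, p_cx, p_mut = 30, 0.75, 0.001  # values from the paper
n_iter = 200                             # the paper iterates 2000 times

pop = ann_proposals(target, pop_size // 2) + \
      [rng.integers(0, 2, (N, N)).astype(float) for _ in range(pop_size // 2)]

for _ in range(n_iter):
    parents = sorted(pop, key=lambda h: -fitness(h, target))[:pop_size // 2]
    children = []
    for _ in range(pop_size - len(parents)):
        a, b = rng.choice(len(parents), 2, replace=False)
        mask = rng.random((N, N)) < (0.5 if rng.random() < p_cx else 0.0)
        child = np.where(mask, parents[a], parents[b])                  # uniform crossover
        child = np.where(rng.random((N, N)) < p_mut, 1 - child, child)  # mutation
        children.append(child)
    pop = parents + children

print("best target overlap:", max(fitness(h, target) for h in pop))
```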


A study on the change effect of emission regulation mode on vehicle emission gas (배기가스 규제 모드 변화가 차량 배기가스에 미치는 영향 연구)

  • Lee, Min-Ho;Kim, Ki-Ho;Lee, Joung-Min
    • Journal of the Korean Applied Science and Technology / v.35 no.4 / pp.1108-1119 / 2018
  • As interest in air pollution gradually rises at home and abroad, automotive and fuel researchers have studied exhaust and greenhouse gas emission reduction from vehicles through many approaches, including new engine designs, innovative after-treatment systems, clean (eco-friendly alternative) fuels, and fuel quality improvement. This research addresses two main issues: vehicle exhaust emissions (regulated and non-regulated emissions, and particulate matter (PM)) and greenhouse gases. Exhaust emissions and greenhouse gases from automobiles cause many problems, such as ambient pollution and health effects. To reduce these emissions, many countries are introducing new exhaust gas test modes. The Worldwide harmonized Light-duty vehicle Test Procedure (WLTP) for emission certification has been developed in the UNECE WP.29 forum since 2007, and this test procedure was applied to domestic light-duty diesel vehicles at the same time as in Europe. Air pollutant emissions from light-duty vehicles are regulated as mass per distance, so the driving cycle can affect the results. Vehicle exhaust emissions vary substantially with climate conditions and driving habits. Extreme outside temperatures tend to increase emissions, because more fuel must be used to heat or cool the cabin; high driving speeds increase emissions because of the energy required to overcome increased drag; rapid vehicle acceleration increases emissions compared with gradual acceleration; and additional devices (air conditioner and heater) and road inclines also increase emissions. In this study, three light-duty vehicles were tested with the WLTP, NEDC, and FTP-75 cycles, which are used to regulate light-duty vehicle emissions, to determine how much the emissions are affected by the different driving cycles. The emissions did not show a statistically meaningful difference between cycles. The maximum emissions were found in the low-speed phase of the WLTP, mainly caused by cold engine conditions. The amount of emissions under cold engine conditions differed considerably among the test vehicles, which means that different technical solutions are required in this respect to cope with the WLTP driving cycle.
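
Since the regulation is expressed as emitted mass per distance, a cycle result is simply the total emitted mass divided by the distance driven, per phase and over the whole cycle. The sketch below shows this arithmetic with invented CO masses; the phase distances only approximate the WLTC Class 3 cycle, and none of these numbers are measurements from the paper.

```python
# Distance-specific emissions: emitted mass / distance, per phase and per cycle.
# CO masses are invented; distances approximate the WLTC Class 3 phases.
phases = {                    # phase: (emitted CO mass in g, distance in km)
    "low":        (5.2, 3.095),
    "medium":     (2.1, 4.756),
    "high":       (1.8, 7.162),
    "extra_high": (2.4, 8.254),
}

total_mass = sum(m for m, _ in phases.values())
total_dist = sum(d for _, d in phases.values())
for name, (m, d) in phases.items():
    print(f"{name:>10}: {m / d:.3f} g/km")
print(f"{'cycle':>10}: {total_mass / total_dist:.3f} g/km")
```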

A Model Experiment Study to Secure the Straight Line Distance between the Air Inlet and Exhaust Section of the Living Room (거실제연설비중 공기유입구와 배출구간 직선거리 확보를 위한 모형실험연구)

  • Saeng-Gon Lee;Se-Hong Min
    • Journal of the Society of Disaster Information / v.19 no.2 / pp.439-450 / 2023
  • Purpose: When fire inspections are conducted in Korea, some buildings violate the fire protection regulation that requires a straight-line distance of more than 5 m between the air inlet and the discharge section when the floor area is less than 400 m²; this paper analyzes the reasons and presents a model experimental study to support the need for the related regulation. Method: Domestic buildings subject to fire protection were investigated and confirmed; domestic and foreign papers, policies, and regulations were reviewed; and spaces with straight-line distances of less than 5 m and more than 5 m between the air inlet and the discharge section were selected and analyzed through model experiments in a living room of less than 400 m². Result: The domestic fire protection regulation (NFPC, National Fire Performance Code) requires a separation distance of more than 5 m between the air inlet and the outlet when the floor area is less than 400 m², but the field investigation confirmed that some buildings cannot maintain this separation distance. In addition, while the review of overseas fire protection regulations found no rule on the straight-line distance between the air inlet and the discharge section, the model experiment showed that the discharge speed was better when the straight-line distance was more than 5 m than when it was less. Conclusion: Although overseas fire laws and regulations impose no mandatory rule on the straight-line distance, the domestic regulation (NFPC) requires more than 5 m, and the experiments comparing performance across the straight-line distance support this requirement. It is hoped that this will be reflected in the design stage in the future and lay a foundation for reducing the responsibility and burden of fire superintendents.

Forecasting Substitution and Competition among Previous and New products using Choice-based Diffusion Model with Switching Cost: Focusing on Substitution and Competition among Previous and New Fixed Charged Broadcasting Services (전환 비용이 반영된 선택 기반 확산 모형을 통한 신.구 상품간 대체 및 경쟁 예측: 신.구 유료 방송서비스간 대체 및 경쟁 사례를 중심으로)

  • Koh, Dae-Young;Hwang, Jun-Seok;Oh, Hyun-Seok;Lee, Jong-Su
    • Journal of Global Scholars of Marketing Science / v.18 no.2 / pp.223-252 / 2008
  • In this study, we propose a choice-based diffusion model with switching cost, which can be used to forecast the dynamic substitution and competition among previous and new products at both the individual and the aggregate level, especially when market data for new products are insufficient. We apply the proposed model to the empirical case of substitution and competition among Analog Cable TV, which represents the previous fixed charged broadcasting service, and Digital Cable TV and Internet Protocol TV (IPTV), which are the new ones; verify the validity of the proposed model; and derive related empirical implications. For the empirical application, we obtained data from a survey administered by Dongseo Research to 1,000 adults aged 20 to 60 living in Seoul, Korea, in May 2007, under the title 'Demand analysis of next generation fixed interactive broadcasting services'. A conjoint survey, modified as follows, was used. Following the traditional conjoint approach, we extracted 16 hypothetical alternative cards from an orthogonal design using important attributes and levels of next generation interactive broadcasting services, determined from the literature and experts' comments. We then divided the 16 conjoint cards into 4 groups, composing 4 choice sets with 4 alternatives each, so that each respondent faces 4 different hypothetical choice situations. We added two further modifications. First, we asked respondents to include the status quo broadcasting service they subscribe to as another alternative in each choice set; respondents therefore chose the most preferred among 5 alternatives, one being their current subscription and 4 being hypothetical. This modification enabled us to estimate the factors related to switching cost, or the switching threshold, in addition to the effects of the attributes; moreover, using both revealed preference data (the current subscription) and stated preference data (the 4 hypothetical alternatives) yields additional advantages in estimation properties and a more conservative, realistic forecast. Second, we asked respondents to choose the most preferred alternative while considering their expected adoption or switching timing, reported among 14 half-year points after the introduction of the next generation broadcasting services. Each respondent thus produces 14 observations with 5 alternatives per period, resulting in panel-type data; the resulting 4 × 14 × 1,000 = 56,000 observations are used to estimate the individual-level consumer adoption model. The empirical results show that forecasting the demand for new products without considering the existence of previous products and/or switching cost factors can overestimate the speed of diffusion at the introductory stage or otherwise distort predictions; this verifies the validity of our proposed model, in which both factors are properly considered.
Moreover, the proposed model can produce flexible patterns of market evolution, depending on how strongly consumer preferences for the attributes of the alternatives affect individual-level state transitions, rather than following an S-shaped curve assumed a priori. Empirically, under various scenarios with diverse price combinations, IPTV is more likely to take an advantageous position over Digital Cable TV in attracting subscribers. Meanwhile, despite inferiority in many technological attributes, Analog Cable TV, the previous product in our analysis, is likely to be substituted by the new services gradually rather than abruptly, thanks to its low service charge and the high switching costs in the fixed charged broadcasting service market.
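
A minimal sketch of the model's core mechanism may help: each period, every consumer chooses among services via a multinomial logit in which non-status-quo alternatives effectively carry a switching-cost penalty, and aggregating the simulated choices yields diffusion curves. The utilities, switching cost, and population below are hypothetical, not the coefficients estimated from the survey.

```python
# Toy choice-based diffusion with switching cost (hypothetical parameters).
import numpy as np

rng = np.random.default_rng(1)
services = ["Analog Cable TV", "Digital Cable TV", "IPTV"]
base_utility = np.array([0.0, 0.6, 0.8])   # invented attribute utilities
switching_cost = 1.5                       # utility penalty for switching away
n_consumers, n_periods = 5000, 14          # 14 half-year points, as in the survey

state = np.zeros(n_consumers, dtype=int)   # everyone starts on Analog Cable TV
for t in range(1, n_periods + 1):
    u = np.tile(base_utility, (n_consumers, 1))
    u[np.arange(n_consumers), state] += switching_cost    # staying avoids the cost
    p = np.exp(u) / np.exp(u).sum(axis=1, keepdims=True)  # multinomial logit
    state = np.array([rng.choice(3, p=pi) for pi in p])
    share = np.bincount(state, minlength=3) / n_consumers
    print(f"t={t:2d}  " + "  ".join(f"{s}: {v:.2f}" for s, v in zip(services, share)))
```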


Memory Organization for a Fuzzy Controller.

  • Jee, K.D.S.;Poluzzi, R.;Russo, B.
    • Proceedings of the Korean Institute of Intelligent Systems Conference / 1993.06a / pp.1041-1043 / 1993
  • Fuzzy-logic-based control theory has gained much interest in the industrial world, thanks to its ability to formalize and solve, in a very natural way, many problems that are very difficult to quantify at an analytical level. This paper shows a solution for treating membership functions inside hardware circuits. The proposed hardware structure optimizes the memory size by using a particular form of vectorial representation. The process of memorizing fuzzy sets, i.e., their membership functions, has always been one of the more problematic issues for hardware implementation, owing to the quite large memory space needed. To simplify the implementation, it is common [1,2,8,9,10,11] to limit the membership functions either to those having a triangular or trapezoidal shape, or to a pre-defined shape. These kinds of functions cover a large spectrum of applications with limited memory usage, since they can be memorized by specifying very few parameters (height, base, critical points, etc.); this, however, results in a loss of computational power due to the computation of intermediate points. A solution to this problem is obtained by discretizing the universe of discourse U, i.e., by fixing a finite number of points and memorizing the values of the membership functions at those points [3,10,14,15]. Such a solution provides satisfactory computational speed and very high precision of definition, and gives users the opportunity to choose membership functions of any shape; however, significant memory waste can occur, since for each fuzzy set many elements of the universe of discourse may have a membership value equal to zero. It has also been noticed that in almost all cases the common points among fuzzy sets, i.e., points with non-null membership values, are very few. More specifically, in many applications, for each element u of U there exist at most three fuzzy sets for which the membership value is not null [3,5,6,7,12,13]. Our proposal is based on these hypotheses. Moreover, we use a technique that, even though it does not restrict the shapes of membership functions, strongly reduces the computation time for membership values and optimizes the function memorization. Figure 1 shows a term set whose characteristics are common for fuzzy controllers and to which we refer in the following. The term set has a universe of discourse with 128 elements (to give good resolution), 8 fuzzy sets describing the term set, and 32 discretization levels for the membership values. The numbers of bits necessary for these specifications are 5 for the 32 truth levels, 3 for the 8 membership functions, and 7 for the 128 levels of resolution. The memory depth is given by the dimension of the universe of discourse (128 in our case) and is represented by the memory rows. The length of a word of memory is defined by Length = nfm × (dm(m) + dm(fm)), where nfm is the maximum number of non-null membership values for an element of the universe of discourse, dm(m) is the number of bits for a membership value, and dm(fm) is the number of bits for the index of the corresponding membership function. In our case, Length = 3 × (5 + 3) = 24, so the memory dimension is 128 × 24 bits. Had we chosen to memorize all values of the membership functions, each memory row would have held the membership value of every fuzzy set, giving a word dimension of 8 × 5 = 40 bits and a memory dimension of 128 × 40 bits. Consistently with our hypothesis, in Fig. 1 each element of the universe of discourse has a non-null membership value on at most three fuzzy sets; the elements 32, 64, and 96 of the universe of discourse, for example, are memorized accordingly.
The computation of the rule weights is done by comparing the bits that represent the index of the membership function with the word of the program memory. The output bus of the program memory (μCOD) is given as input to a comparator (combinatory net): if the index equals the bus value, one of the non-null weights derived from the rule is produced as output; otherwise the output is zero (Fig. 2). Clearly, the memory dimension of the antecedent is reduced in this way, since only non-null values are memorized; moreover, the time performance of the system is equivalent to that of a system using vectorial memorization of all weights. The dimensioning of the word is influenced by some parameters of the input variable. The most important is the maximum number of membership functions (nfm) having a non-null value at each element of the universe of discourse; from our study of fuzzy systems, typically nfm ≤ 3 and there are at most 16 membership functions. In any case, this value can be increased up to the physical dimensional limit of the antecedent memory. A less important role in the optimization of the word dimension is played by the number of membership functions defined for each linguistic term. The table below shows the required word dimension as a function of these parameters and compares our proposed method with the method of vectorial memorization [10]. Summing up, the characteristics of our method are: users are not restricted to membership functions with specific shapes; the number of fuzzy sets and the resolution of the vertical axis have very small influence on memory space; weight computations are done by a combinatorial network, so the time performance of the system is equivalent to that of the vectorial method; and the number of non-null membership values for any element of the universe of discourse is limited, a constraint that is usually not very restrictive, since many controllers obtain good precision with only three non-null weights. The method briefly described here has been adopted by our group in the design of an optimized version of the coprocessor described in [10].
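
A small sketch in software terms may clarify the memorization scheme: each row of the antecedent memory stores, for one element of the universe of discourse, at most nfm (index, value) pairs instead of one value per fuzzy set. The term-set parameters follow the paper's example; the triangular shapes are only a convenient stand-in, since the method itself does not restrict shapes.

```python
# Sparse antecedent memory: per universe element, keep at most NFM
# (set index, membership value) pairs. Parameters follow the paper's example.
N_ELEMENTS, N_SETS, N_LEVELS, NFM = 128, 8, 32, 3

def triangular(u, left, peak, right, levels=N_LEVELS):
    """Discretized triangular membership value of element u."""
    if left < u < right:
        frac = (u - left) / (peak - left) if u <= peak else (right - u) / (right - peak)
        return round(frac * (levels - 1))
    return 0

# Eight overlapping triangles as a stand-in term set (the method itself
# does not restrict membership-function shapes).
sets = [(16 * i - 16, 16 * i, 16 * i + 16) for i in range(N_SETS)]

memory = []
for u in range(N_ELEMENTS):
    row = [(i, triangular(u, *s)) for i, s in enumerate(sets)]
    row = [pair for pair in row if pair[1] > 0][:NFM]
    memory.append(row + [(0, 0)] * (NFM - len(row)))        # pad to NFM entries

word_bits = NFM * (5 + 3)    # Length = nfm * (dm(m) + dm(fm)) = 24 bits
dense_bits = N_SETS * 5      # vectorial memorization: 8 sets * 5 bits = 40 bits
print(f"sparse word: {word_bits} bits/row vs dense: {dense_bits} bits/row")
print("element 64 ->", memory[64])
```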


Multi-Dimensional Analysis Method of Product Reviews for Market Insight (마켓 인사이트를 위한 상품 리뷰의 다차원 분석 방안)

  • Park, Jeong Hyun;Lee, Seo Ho;Lim, Gyu Jin;Yeo, Un Yeong;Kim, Jong Woo
    • Journal of Intelligence and Information Systems / v.26 no.2 / pp.57-78 / 2020
  • With the development of the Internet, consumers can easily check product information through E-Commerce. Product reviews used in the purchasing process are based on user experience, allowing consumers to act as producers of information as well as consult it. This can increase the efficiency of purchasing decisions from the consumer's perspective, and from the seller's point of view it can help develop products and strengthen competitiveness. However, it takes a great deal of time and effort for consumers to read the vast number of product reviews offered by E-Commerce sites and grasp the overall assessment, and the assessment dimensions they consider important, for the products they want to compare, because product reviews are unstructured information whose sentiment and assessment dimension cannot be read off immediately. For example, consumers who want to purchase a laptop would like to check the assessment of comparable products along each dimension, such as performance, weight, delivery, speed, and design. Therefore, in this paper, we propose a method to automatically generate multi-dimensional product assessment scores from the reviews of the products to be compared. The proposed method consists of two phases: a pre-preparation phase and an individual product scoring phase. In the pre-preparation phase, a dimension classification model and a sentiment analysis model are created from reviews of the large product category. By combining word embedding and association analysis, the dimension classification model overcomes the limitation of word-embedding approaches in existing studies, which consider only the distance between words in sentences when measuring the relevance between dimensions and words. The sentiment analysis model is a CNN trained on learning data tagged as positive or negative at the phrase level, for accurate polarity detection. In the individual product scoring phase, the pre-prepared models are applied to phrase-level reviews: phrases judged to describe a specific assessment dimension are grouped by dimension, and multi-dimensional assessment scores are obtained by aggregating their sentiment by dimension in proportion to the reviews so grouped. In the experiment of this paper, approximately 260,000 reviews of the large product category were collected to build the dimension classification model and the sentiment analysis model, and reviews of laptops from companies S and L sold through E-Commerce were collected and used as experimental data. The dimension classification model classified individual product reviews, broken down into phrases, into six assessment dimensions, combining the existing word-embedding method with an association analysis indicating the frequency between words and dimensions; this combination increased the accuracy of the model by 13.7%. The sentiment analysis model analyzed assessments more closely when trained at the phrase level rather than on sentences, achieving an accuracy 29.4% higher than the sentence-based model. Through this study, both sellers and consumers can expect more efficient decision making in purchasing and product development, given that multi-dimensional comparisons of products become possible.
In addition, text reviews, which are unstructured data, were transformed into objective values such as frequencies and morphemes and analyzed jointly using word embedding and association analysis, improving the objectivity of the more precise multi-dimensional analysis. This makes the model attractive not only for more effective service deployment in the evolving, fiercely competitive E-Commerce market, but also for satisfying both sellers and customers.
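
A toy sketch of the dimension-scoring idea follows: the score of a phrase for each assessment dimension blends the embedding similarity between its words and the dimension with a normalized word-dimension association frequency. The vocabulary, vectors, association counts, and the blending weight alpha are all invented; the study's actual models are trained on roughly 260,000 reviews.

```python
# Toy dimension scoring: blend embedding similarity with association frequency.
import numpy as np

rng = np.random.default_rng(2)
dimensions = ["performance", "weight", "delivery", "speed", "design", "battery"]
vocab = ["fast", "heavy", "light", "shipping", "pretty", "lag", "arrived", "slim"]

# Invented word vectors; in the study, embeddings come from ~260k reviews.
vec = {w: rng.normal(size=16) for w in vocab + dimensions}
cos = lambda a, b: a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Invented association counts: co-occurrence of each word with each dimension.
assoc = {w: rng.integers(1, 20, size=len(dimensions)) for w in vocab}

def dimension_scores(phrase, alpha=0.5):
    """Score a phrase on each dimension: alpha * embedding similarity
    + (1 - alpha) * normalized word-dimension association frequency."""
    scores = np.zeros(len(dimensions))
    for w in phrase.split():
        if w not in assoc:
            continue
        sim = np.array([cos(vec[w], vec[d]) for d in dimensions])
        scores += alpha * sim + (1 - alpha) * assoc[w] / assoc[w].sum()
    return dict(zip(dimensions, scores.round(3)))

print(dimension_scores("arrived fast shipping"))
```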

SANET-CC : Zone IP Allocation Protocol for Offshore Networks (SANET-CC : 해상 네트워크를 위한 구역 IP 할당 프로토콜)

  • Bae, Kyoung Yul;Cho, Moon Ki
    • Journal of Intelligence and Information Systems / v.26 no.4 / pp.87-109 / 2020
  • Currently, thanks to major strides in wired and wireless communication technology, a variety of IT services are available on land. This trend is leading to increasing demand for IT services on vessels as well, and requests for services such as two-way digital data transmission, the Web, and apps are expected to rise toward the level available on land. However, while a high-speed information communication network is easily accessible on land because it rests on fixed infrastructure such as APs and base stations, this is not the case on the water; as a result, a voice service based on a radio communication network is usually used at sea. To solve this problem, an additional frequency for digital data exchange was allocated, and a ship ad-hoc network (SANET) utilizing this frequency was proposed. Instead of satellite communication, which is costly to install and use, SANET was developed to provide various IP-based IT services to ships at sea. Connectivity between land base stations and ships is important in SANET; to have this connection, a ship must be a member of the network with an IP address assigned. This paper proposes SANET-CC, a protocol that allows ships to be assigned their own IP addresses. SANET-CC propagates several non-overlapping IP address blocks through the entire network, from land base stations to ships, in the form of a tree. Ships obtain their own IP addresses through the exchange of simple request and response messages with land base stations or with M-ships that can allocate IP addresses. SANET-CC can therefore eliminate the IP collision prevention (Duplicate Address Detection) process, as well as the network separation or integration processes caused by ship movement. Various simulations were performed to verify the applicability of this protocol to SANET, with the following outcomes. First, using SANET-CC, about 91% of the ships in the network were able to receive IP addresses under all circumstances, 6% higher than in existing studies, and the results suggest that adjusting the variables to each port's environment may yield further improvement. Second, all vessels received IP addresses in an average of 10 seconds regardless of conditions, a 50% decrease from the average of 20 seconds in the previous study; moreover, considering that existing studies covered 50 to 200 vessels while this study covers 100 to 400, the efficiency gain can be even greater. Third, existing studies could not derive optimal values for the variables because their results showed no consistent pattern across them, meaning optimal variable values could not be set for ports in diverse environments; this paper, however, shows that the results exhibit a consistent pattern across the variables, which is significant in that the protocol can be applied to each port by adjusting the variable values. It was also confirmed that, regardless of the number of ships, the IP allocation ratio was most efficient at about 96% when the waiting time after an IP request was 75 ms, and that the tree structure could maintain a stable network configuration when the number of IPs exceeded 30,000.
Fourth, this study can be used to design networks supporting intelligent maritime control systems and offshore services in place of satellite communication; and if LTE-M is deployed, the network can also support various intelligent services.
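
A brief sketch of the tree-style zone allocation idea follows (hypothetical message flow and block sizes; the actual protocol details differ): a land base station owns a pool of non-overlapping address blocks and hands disjoint sub-blocks to M-ships, which propagate smaller disjoint blocks down the tree, so Duplicate Address Detection is unnecessary by construction.

```python
# Toy tree-style zone IP allocation: disjoint blocks flow from a land base
# station to M-ships and on to ships, so no Duplicate Address Detection needed.
from ipaddress import ip_network

class Allocator:
    """A node (land station or M-ship) owning an address block that hands
    out disjoint sub-blocks in response to simple requests."""
    def __init__(self, block, sub_prefix):
        self.free = list(ip_network(block).subnets(new_prefix=sub_prefix))
        self.assigned = {}

    def request(self, ship_id):
        # Idempotent request/response: assign the next free sub-block.
        if ship_id not in self.assigned and self.free:
            self.assigned[ship_id] = self.free.pop(0)
        return self.assigned.get(ship_id)

land = Allocator("10.0.0.0/16", 20)       # land station splits its pool into /20 zones
zone = land.request("M-ship-1")           # an M-ship receives a whole /20 zone
m_ship = Allocator(str(zone), 24)         # and propagates /24 blocks down the tree
print("M-ship zone :", zone)
print("ship A gets :", m_ship.request("ship-A"))
print("ship B gets :", m_ship.request("ship-B"))  # disjoint from ship A by construction
```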