• Title/Summary/Keyword: 3D digital technology

The Effects of Financial Information on Firm Valuation for Information Technology Related Companies: Evidence from Software, Digital Content, and Internet Related Companies Listed on KOSDAQ (회계정보가 정보기술 관련 산업의 기업가치 평가에 미치는 영향 : 소프트웨어, 디지털콘텐츠, 인터넷 관련 코스닥 상장기업을 중심으로)

  • Kim, Jeong-Yeon
    • The Journal of Society for e-Business Studies
    • /
    • v.17 no.3
    • /
    • pp.73-84
    • /
    • 2012
  • With the transition to a knowledge society and the growth of the information industry, many companies trade at stock prices higher than the value suggested by their financial information. To explain such cases in capital markets, many researchers focus on non-financial information, such as web traffic data, or on intangible assets, such as intellectual property rights, rather than on traditional financial analysis. Moreover, the relationships between financial and non-financial information and firm value change over the industry lifecycle: as an industry matures, a company's financial information becomes more important for firm valuation in the capital market. We review the changes in the relationship between financial information and firm valuation, focusing on "Software", "Digital Contents", and "Internet" companies listed on the KOSDAQ market during 2000~2011. The analysis shows that financial information became more important after 2007, which conversely provides analytical evidence that the related industries have matured. We also show that intangible assets are more relevant to the stock prices of these technology-based companies than to those of others.
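
The abstract does not disclose the valuation model; a common approach in this literature is an Ohlson-style value-relevance regression estimated year by year, so that changes in explanatory power over time can be tracked. A minimal sketch under that assumption, with hypothetical column names for the firm-year panel:

```python
# A year-by-year value-relevance regression in the spirit of this study (the
# exact model is not published in the abstract; an Ohlson-style specification
# is assumed). The CSV and its column names are hypothetical.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("kosdaq_it_firms.csv")  # hypothetical panel: one row per firm-year

for year, g in df.groupby("year"):
    X = sm.add_constant(g[["book_value_ps", "earnings_ps", "intangibles_ps"]])
    model = sm.OLS(g["price"], X, missing="drop").fit()
    # Rising R^2 and coefficient significance after 2007 would indicate that
    # financial information became more value-relevant as the industry matured.
    print(year, round(model.rsquared, 3), model.params.round(3).to_dict())
```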

Establishment of Crowd Management Safety Measures Based on Crowd Density Risk Simulation (군중 밀집 위험도 시뮬레이션 기반의 인파 관리 안전대책 수립)

  • Hyuncheol Kim;Hyungjun Im;Seunghyun Lee;Youngbeom Ju;Soonjo Kwon
    • Journal of the Korean Society of Safety
    • /
    • v.38 no.2
    • /
    • pp.96-103
    • /
    • 2023
  • Generally, human stampedes and crowd collapses occur when people press against each other, causing falls that may result in death or injury. Crowd accidents have become increasingly common since the 1990s, with an average of 380 deaths annually. In Korea, for instance, a crowd crush occurred during the Itaewon Halloween festival on October 29, 2022, when a large crowd pressed onto a narrow, downhill road 45 meters long and between 3.2 and 4 meters wide; the accident was primarily due to the excessive number of people relative to the size of the road. Stampedes can occur anywhere and at any time, not just at events but in any place where large crowds gather. More specifically, the likelihood of accidents increases when the crowd density exceeds a turbulence threshold of 5-6 persons/m². Meanwhile, festivals and events, which have become more frequent and are promoted through social media, draw people from near and far to a specific location, and as cities grow, the number of people gathering in one place increases. While stampedes are rare, their impact is significant and the uncertainty associated with them is high, yet there is currently no scientific system to analyze the risk of stampedes due to crowd concentration. To prevent such accidents, it is essential to prepare for crowd disasters in a way that reflects social changes and regional characteristics. Hence, this study proposes using digital topographic maps and crowd-density risk simulations to develop a 3D model of the region. Specifically, the crowd density simulation analyzes the density of people walking along specific paths, which enables the prediction of danger areas and the risk of crowding. Using the simulation method in this study, safety measures can be rationally established for specific situations, such as local festivals, and preparations can be made for crowd accidents in downtown areas.
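
A minimal sketch of the density-risk check the abstract describes: bin simulated pedestrian positions into a 1 m grid and flag cells at or above the turbulence threshold of 5-6 persons/m². The positions would normally come from a pedestrian simulator over the 3D terrain model; here they are random stand-ins on a road with the Itaewon dimensions:

```python
# Grid-based crowd-density risk flagging; positions are random stand-ins for
# simulator output on a 45 m x 4 m road.
import numpy as np

positions = np.random.uniform(0, 1, size=(2000, 2)) * [45, 4]  # (x, y) in metres
length_m, width_m, cell = 45, 4, 1.0

density, _, _ = np.histogram2d(
    positions[:, 0], positions[:, 1],
    bins=[int(length_m / cell), int(width_m / cell)],
    range=[[0, length_m], [0, width_m]],
)
density /= cell ** 2                      # persons per square metre
risk_cells = np.argwhere(density >= 5.0)  # cells at or beyond the turbulence threshold
print(f"{len(risk_cells)} of {density.size} cells exceed 5 persons/m^2")
```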

Spatial Analysis of the Agro-Environment of North Korea Using Remote Sensing I. Landcover Classification from Landsat TM Imagery and Topography Analysis in North Korea (위성영상을 이용한 북한의 농업환경 분석 I. Landsat TM 영상을 이용한 북한의 지형과 토지피복분류)

  • Hong, Suk-Young;Rim, Sang-Kyu;Lee, Seung-Ho;Lee, Jeong-Cheol;Kim, Yi-Hyun
    • Korean Journal of Environmental Agriculture
    • /
    • v.27 no.2
    • /
    • pp.120-132
    • /
    • 2008
  • Remotely sensed satellite images can be used to detect and quantify spatial and temporal variations in land use and land cover, crop growth, and disasters for agricultural applications. The purposes of this study were to analyze the topography of North Korea using a DEM (digital elevation model) and to classify land use and land cover into 10 classes (paddy field, dry field, forest, bare land, grass & bush, water body, reclaimed land, salt farm, residence & building, and others) using Landsat TM images. Elevation was greater than 1,000 meters in the eastern part of North Korea around Ranggang-do, where Kaemagowon is located, while Pyeongnam and Hwangnam in the western part were low in elevation; the topography of North Korea thus shows the typical 'east-high, west-low' landform pattern. Landcover classification of North Korea using the spectral reflectance of multi-temporal Landsat TM images was performed, and area statistics for each landcover class were calculated by administrative district, slope, and agroclimatic zone. Forest accounted for 69.6 percent of the whole area, while dry fields and paddy fields covered 15.7 percent and 4.2 percent, respectively; bare land and water bodies occupied 6.6 percent and 1.6 percent, and residence & building covered less than 1 percent of the country. More than 80 percent of paddy field areas were concentrated in the A slope class (0 to 2 percent). Dry field areas were most common in the A slope class, followed by the D, E, C, B, and F classes. By agroclimatic zone, paddy and dry fields were mainly distributed in the North plain region (N-6) and the North western coastal region (N-7), while forest was evenly distributed over all agroclimatic regions. Periodic landcover analysis of North Korea based on remote sensing of satellite imagery can produce spatial and temporal statistics for future land use management and planning in North Korea.
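
The paper does not state which classifier was used; per-pixel supervised classification of the stacked TM band reflectances is the standard workflow, so the sketch below assumes a Gaussian maximum-likelihood classifier (QDA) and uses random arrays as stand-ins for the georeferenced rasters and training polygons:

```python
# Per-pixel supervised landcover classification into the paper's 10 classes.
# Classifier choice (Gaussian maximum-likelihood via QDA) is an assumption;
# pixels and labels are stand-ins for real rasters and ground truth.
import numpy as np
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis

CLASSES = ["paddy", "dry_field", "forest", "bare", "grass_bush",
           "water", "reclaimed", "salt_farm", "built", "other"]

pixels = np.random.rand(10_000, 12)            # multi-temporal TM bands per pixel
train_idx = np.random.choice(10_000, 1_000, replace=False)
train_labels = np.random.randint(0, len(CLASSES), size=1_000)  # from ground truth

clf = QuadraticDiscriminantAnalysis()           # Gaussian maximum-likelihood
clf.fit(pixels[train_idx], train_labels)
landcover = clf.predict(pixels)                 # one class per pixel

# area statistics per class, as in the paper (30 m TM pixels -> hectares)
areas_ha = np.bincount(landcover, minlength=len(CLASSES)) * (30 * 30) / 10_000
print(dict(zip(CLASSES, areas_ha.round(1))))
```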

Hardware Approach to Fuzzy Inference―ASIC and RISC―

  • Watanabe, Hiroyuki
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 1993.06a
    • /
    • pp.975-976
    • /
    • 1993
  • This talk presents an overview of the author's research and development activities on fuzzy inference hardware, which involved two distinct approaches. The first approach uses application-specific integrated circuit (ASIC) technology: the fuzzy inference method is implemented directly in silicon. The second approach, which is in its preliminary stage, uses a more conventional microprocessor architecture; here, we use a quantitative technique employed by designers of reduced instruction set computers (RISC) to modify the architecture of a microprocessor. In the ASIC approach, we implemented the most widely used fuzzy inference mechanism directly on silicon. The mechanism is based on the max-min compositional rule of inference and Mamdani's method of fuzzy implication. Two VLSI fuzzy inference chips were designed, fabricated, and fully tested, both using a full-custom CMOS technology. The second and more elaborate chip was designed at the University of North Carolina (UNC) in cooperation with MCNC. Both VLSI chips had multiple datapaths for rule evaluation and executed multiple fuzzy if-then rules in parallel. The AT&T chip is the first digital fuzzy inference chip in the world. It ran with a 20 MHz clock cycle and achieved approximately 80,000 fuzzy logical inferences per second (FLIPS). It stored and executed 16 fuzzy if-then rules. Since it was designed as a proof-of-concept prototype chip, it had a minimal amount of peripheral logic for system integration. The UNC/MCNC chip consists of 688,131 transistors, of which 476,160 are used for RAM. It ran with a 10 MHz clock cycle, has a 3-stage pipeline, and initiates a new inference every 64 cycles, achieving approximately 160,000 FLIPS. The new architecture has the following important improvements over the AT&T chip: programmable rule set memory (RAM); on-chip fuzzification by a table-lookup method; on-chip defuzzification by a centroid method; a reconfigurable architecture for processing two rule formats; and RAM/datapath redundancy for higher yield. It can store and execute 51 if-then rules of the following format: IF A and B and C and D THEN DO E and DO F. With this format, the chip takes four inputs and produces two outputs. By software reconfiguration, it can store and execute 102 if-then rules of the simpler format IF A and B THEN DO E using the same datapath; with this format the chip takes two inputs and produces one output. We have built two VME-bus board systems based on this chip for Oak Ridge National Laboratory (ORNL). The board is now installed in a robot at ORNL, where researchers use it for experiments in autonomous robot navigation. The fuzzy logic system board places the fuzzy chip into a VMEbus environment. High-level C language functions hide the operational details of the board from the application programmer, who treats rule memories and fuzzification function memories as local structures passed as parameters to the C functions. ASIC fuzzy inference hardware is extremely fast, but it is limited in generality: many aspects of the design are limited or fixed. We have therefore proposed designing a fuzzy information processor as an application-specific processor using a quantitative approach, the approach developed by RISC designers.
In effect, we are interested in evaluating the effectiveness of a specialized RISC processor for fuzzy information processing. As a first step, we measured the possible speed-up of a fuzzy inference program based on if-then rules from the introduction of specialized instructions, i.e., min and max instructions. The minimum and maximum operations are heavily used in fuzzy logic applications as fuzzy intersection and union. We performed measurements using a MIPS R3000 as a base microprocessor. The initial result is encouraging: we could achieve as much as a 2.5-fold increase in inference speed if the R3000 had min and max instructions. These instructions are also useful for speeding up other fuzzy operations such as bounded product and bounded sum. An embedded processor's main task is to control some device or process, and it usually runs a single program or a small fixed set of programs, so extending an embedded processor in this way for fuzzy control is very effective. Table I shows the measured inference speed of a MIPS R3000 microprocessor, a fictitious MIPS R3000 with min and max instructions, and the UNC/MCNC ASIC fuzzy inference chip; the software used on the microprocessors is a simulator of the ASIC chip. The first row is the computation time in seconds for 6,000 inferences using 51 rules, where each fuzzy set is represented by an array of 64 elements. The second row is the time required to perform a single inference, and the last row is the fuzzy logical inferences per second (FLIPS) measured for each device. There is a large gap in run time between the ASIC and software approaches even if we resort to a specialized fuzzy microprocessor. As for design time and cost, these two approaches represent two extremes; an ASIC approach is extremely expensive. It is therefore an important research topic to design a specialized computing architecture for fuzzy applications that falls between these two extremes in both run time and design time/cost.

TABLE I. INFERENCE TIME BY 51 RULES

                     MIPS R3000 (regular)   MIPS R3000 (with min/max)   ASIC
  6,000 inferences   125 s                  49 s                        0.0038 s
  1 inference        20.8 ms                8.2 ms                      6.4 µs
  FLIPS              48                     122                         156,250
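
The abstract names all the pieces of the mechanism the chips implement: max-min composition with Mamdani implication, table-lookup fuzzification, centroid defuzzification, and fuzzy sets stored as 64-element arrays. A minimal software sketch of that mechanism, with toy membership tables, which also shows why min/max instructions dominate the workload:

```python
# Mamdani max-min inference over 64-element fuzzy sets, as in the paper's
# benchmark. Membership functions are toy triangles, not the chip's tables.
import numpy as np

N = 64
x = np.arange(N)
tri = lambda c, w: np.clip(1 - np.abs(x - c) / w, 0, 1)  # triangular membership

# Two rules of the form IF A THEN DO E (input universe -> output universe).
rules = [(tri(16, 12), tri(20, 10)),
         (tri(48, 12), tri(44, 10))]

def infer(crisp_in: int) -> float:
    agg = np.zeros(N)
    for antecedent, consequent in rules:
        w = antecedent[crisp_in]                          # fuzzification: table lookup
        agg = np.maximum(agg, np.minimum(w, consequent))  # min implication, max aggregation
    return (x * agg).sum() / agg.sum()                    # centroid defuzzification

print(infer(20))
```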

Estimation of Rice Canopy Height Using Terrestrial Laser Scanner (레이저 스캐너를 이용한 벼 군락 초장 추정)

  • Dongwon Kwon;Wan-Gyu Sang;Sungyul Chang;Woo-jin Im;Hyeok-jin Bak;Ji-hyeon Lee;Jung-Il Cho
    • Korean Journal of Agricultural and Forest Meteorology
    • /
    • v.25 no.4
    • /
    • pp.387-397
    • /
    • 2023
  • Plant height is a growth parameter that provides visible insight into a plant's growth status and is highly correlated with yield, so it is widely used in crop breeding and cultivation research. Growth characteristics such as plant height have generally been measured directly by humans using a ruler, but with recent developments in sensing and image analysis technology, research is being attempted to digitize growth measurement so that crop growth can be investigated efficiently. In this study, the canopy height of rice grown at various nitrogen fertilization levels was measured using a laser scanner capable of precise measurement over a wide range, and the results were compared with the actual plant height. Comparing the point cloud data collected with the laser scanner against the actual plant height confirmed that the estimated height based on the average height of the top 1% of points showed the highest correlation with the actual plant height (R² = 0.93, RMSE = 2.73). Based on this, a linear regression equation was derived and used to convert the canopy height measured with the laser scanner into actual plant height. The rice growth curves drawn by combining actual and estimated plant heights collected under various nitrogen fertilization conditions and across the growth period show that laser scanner-based canopy height measurement can be effectively utilized to assess rice plant height and growth. In the future, 3D images derived from laser scanners are expected to be applicable to crop biomass estimation, plant shape analysis, and similar tasks, and can serve as a technology for the digital transformation of conventional crop growth assessment methods.
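
A minimal sketch of the estimate described above: take the mean height of the top 1% of points in a plot's point cloud, then map it to actual plant height with the fitted linear regression. The point cloud and regression coefficients below are stand-ins:

```python
# Top-1% canopy-height estimate from a laser-scanner point cloud, converted to
# plant height via a linear fit; data and coefficients are stand-ins.
import numpy as np

def canopy_height(points_z: np.ndarray, ground_z: float, top_frac: float = 0.01) -> float:
    heights = np.sort(points_z - ground_z)
    k = max(1, int(len(heights) * top_frac))
    return heights[-k:].mean()            # mean height of the top 1% of points

points_z = np.random.normal(80, 8, size=50_000)   # stand-in scanner returns (cm)
est = canopy_height(points_z, ground_z=0.0)
a, b = 1.05, 2.0                                   # hypothetical fitted slope/intercept
print(f"estimated plant height: {a * est + b:.1f} cm")
```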

Investigation into a Prototyping Tool for Interactive Product Design: Development, Application and Feasibility Study of MIDAS (Media Interaction Design Authoring System) (인터랙티브 제품 디자인을 위한 프로토타이핑 도구: MIDAS의 활용 사례 및 유용성 연구)

  • Yim, Ji-Dong;Nam, Tek-Jin
    • Archives of design research
    • /
    • v.19 no.5 s.67
    • /
    • pp.213-222
    • /
    • 2006
  • This paper presents MIDAS (Media Interaction Design Authoring System), an authoring toolkit for designers and artists to develop working prototypes in new interaction design projects. Field research was conducted to identify the requirements, and a case study of designing new interactive products was carried out to examine the feasibility of the new tool. MIDAS provides easier ways to integrate hardware and software, to manage a wide range of electric input and output elements, and to employ 3D augmented reality technology within conventional multimedia authoring tools such as Director and Flash, which are popularly used by designers. MIDAS was used in case study projects in design education as well as by voluntary designers for evaluation. The case studies showed that many design projects were successfully accomplished using MIDAS. Designers who participated in the projects reported that MIDAS not only helped them concentrate more on ideation but was also very easy to use, as they implemented physical interface concepts without advanced engineering skills. It is expected that MIDAS can also support prototyping in interactive media art, tangible user interface development, and related human-computer interaction fields.
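
The paper does not publish MIDAS's API, so the sketch below only illustrates the general bridge pattern such toolkits use to connect physical I/O to an authoring tool: a microcontroller streams sensor readings over serial, and the authoring environment (e.g., Flash or Director) polls a local socket. The port name and line protocol are hypothetical:

```python
# Hardware-to-authoring-tool bridge pattern (not MIDAS's actual API).
import socket
import serial  # pyserial

ser = serial.Serial("/dev/ttyUSB0", 9600, timeout=1)   # hypothetical sensor device
srv = socket.socket()
srv.bind(("127.0.0.1", 9000))
srv.listen(1)
conn, _ = srv.accept()                                  # authoring tool connects here

while True:
    line = ser.readline().strip()                       # e.g. b"sensor1=512"
    if line:
        conn.sendall(line + b"\n")                      # forward reading to the tool
```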

A Proposal of a Keyword Extraction System for Detecting Social Issues (사회문제 해결형 기술수요 발굴을 위한 키워드 추출 시스템 제안)

  • Jeong, Dami;Kim, Jaeseok;Kim, Gi-Nam;Heo, Jong-Uk;On, Byung-Won;Kang, Mijung
    • Journal of Intelligence and Information Systems
    • /
    • v.19 no.3
    • /
    • pp.1-23
    • /
    • 2013
  • To discover significant social issues, such as unemployment, economic crisis, and social welfare, that urgently need to be solved in modern society, the existing approach is for researchers to collect opinions from professional experts and scholars through online or offline surveys. However, such a method is not always effective. Due to the expense involved, a large number of survey replies are seldom gathered, and in some cases it is hard to find professionals dealing with specific social issues. Thus, the sample set is often small and may be biased. Furthermore, regarding a given social issue, several experts may reach totally different conclusions because each expert has his or her own subjective point of view and background. In this case, it is considerably hard to figure out what the current social issues are and which of them are really important. To surmount the shortcomings of the current approach, in this paper we develop a prototype system that semi-automatically detects keywords representing social issues and problems from about 1.3 million news articles issued by about 10 major domestic presses in Korea from June 2009 until July 2012. Our proposed system consists of (1) collecting and extracting texts from the collected news articles, (2) identifying only the news articles related to social issues, (3) analyzing the lexical items of Korean sentences, (4) finding a set of topics regarding social keywords over time based on probabilistic topic modeling, (5) matching relevant paragraphs to a given topic, and (6) visualizing social keywords for easy understanding. In particular, we propose a novel matching algorithm relying on generative models. The goal of our matching algorithm is to best match paragraphs to each topic. Technically, using a topic model such as Latent Dirichlet Allocation (LDA), we can obtain a set of topics, each of which has relevant terms and their probability values. In our problem, given a set of text documents (e.g., news articles), LDA produces a set of topic clusters, and each topic cluster is then labeled by human annotators, where each topic label stands for a social keyword. For example, suppose there is a topic (e.g., Topic1 = {(unemployment, 0.4), (layoff, 0.3), (business, 0.3)}) and a human annotator labels Topic1 "Unemployment Problem". In this example, it is non-trivial to understand what happened to the unemployment problem in our society: looking only at social keywords, we have no idea of the detailed events occurring in society. To tackle this, we develop a matching algorithm that computes the probability value of a paragraph given a topic, relying on (i) the topic terms and (ii) their probability values. Given a set of text documents, we segment each document into paragraphs and, in the meantime, extract a set of topics from the documents using LDA. Based on our matching process, each paragraph is assigned to the topic it best matches, so each topic ends up with several best-matched paragraphs. For instance, suppose there is a topic (e.g., Unemployment Problem) and its best matched paragraph (e.g., "Up to 300 workers lost their jobs in XXX company in Seoul"). In this case, we can grasp the detailed information behind the social keyword, such as "300 workers", "unemployment", "XXX company", and "Seoul". In addition, our system visualizes social keywords over time.
Therefore, through our matching process and keyword visualization, researchers will be able to detect social issues easily and quickly. Through this prototype system, we have detected various social issues appearing in our society and have shown the effectiveness of our proposed methods in experimental results. Note that our proof-of-concept system is also available at http://dslab.snu.ac.kr/demo.html.
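
The abstract says the matching algorithm scores a paragraph against a topic using the topic terms and their probability values. A minimal sketch consistent with that description (the authors' exact scoring model is not published here): fit LDA, then score each paragraph by the log-probability of its words under each topic's word distribution and assign the best-scoring topic:

```python
# Paragraph-to-topic matching via per-word log-probabilities under LDA topics.
# The three-paragraph corpus is a stand-in.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

paragraphs = ["unemployment rose as the company announced layoffs",
              "the welfare budget for elderly care was expanded",
              "hundreds of workers lost their jobs in the downturn"]

vec = CountVectorizer()
X = vec.fit_transform(paragraphs)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

word_probs = lda.components_ / lda.components_.sum(axis=1, keepdims=True)  # (topics, vocab)
log_probs = np.asarray(X @ np.log(word_probs.T))  # word-count-weighted log-prob per topic
best_topic = log_probs.argmax(axis=1)             # best matching topic per paragraph
print(best_topic)
```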

Evaluation of Application Possibility for Floating Marine Pollutants Detection Using Image Enhancement Techniques: A Case Study for Thin Oil Film on the Sea Surface (영상 강화 기법을 통한 부유성 해양오염물질 탐지 기술 적용 가능성 평가: 해수면의 얇은 유막을 대상으로)

  • Soyeong Jang;Yeongbin Park;Jaeyeop Kwon;Sangheon Lee;Tae-Ho Kim
    • Korean Journal of Remote Sensing
    • /
    • v.39 no.6_1
    • /
    • pp.1353-1369
    • /
    • 2023
  • In the event of a disaster at sea, the scale of damage varies with weather effects such as wind, currents, and tidal waves, and it must be minimized by establishing appropriate response plans through quick on-site identification. In particular, among the pollutants discharged into the sea, those that exist as a thin film on the sea surface are difficult to identify due to their relatively low viscosity and surface tension. Therefore, this study aims to develop an algorithm that detects floating pollutants on the sea surface in RGB images taken with imaging equipment that can be easily used in the field, and to evaluate the algorithm's performance using input data obtained from actual waters. The developed algorithm uses image enhancement techniques to improve the contrast between the intensity values of pollutants and the general sea surface; through histogram analysis, a background threshold is found, floating matter other than pollutants is removed, and finally the pollutants are classified. A real-sea test using substitute materials was performed to evaluate the performance of the developed algorithm: most of the floating marine pollutants were detected, although false detections occurred in areas with strong waves. Even so, the detection results are about three times better than those of the existing single-threshold detection method. These R&D results are expected to be useful for on-site response activities by detecting floating marine pollutants that were difficult to identify with the naked eye in the field.
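
A minimal sketch of the pipeline the abstract outlines: contrast enhancement, a histogram-derived background threshold, and removal of small floating objects. The specific operators (CLAHE, Otsu, connected-component area filtering) and parameters are assumptions; the paper does not give them:

```python
# Thin-film detection sketch: enhance contrast, threshold from the histogram,
# drop small non-pollutant blobs. Operators and parameters are assumed.
import cv2
import numpy as np

img = cv2.imread("sea_surface.jpg")                       # field RGB image (stand-in path)
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
enhanced = clahe.apply(gray)                              # boost film/sea contrast

# histogram-based background threshold (Otsu assumed)
_, mask = cv2.threshold(enhanced, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# remove small floating objects that are not the oil film
n, labels, stats, _ = cv2.connectedComponentsWithStats(mask)
for i in range(1, n):
    if stats[i, cv2.CC_STAT_AREA] < 200:                  # hypothetical minimum area
        mask[labels == i] = 0

cv2.imwrite("pollutant_mask.png", mask)
```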

An Empirical Study on How the Moderating Effects of Individual Cultural Characteristics towards a Specific Target Affects User Experience: Based on the Survey Results of Four Types of Digital Device Users in the US, Germany, and Russia (특정 대상에 대한 개인 수준의 문화적 성향이 사용자 경험에 미치는 조절효과에 대한 실증적 연구: 미국, 독일, 러시아의 4개 디지털 기기 사용자를 대상으로)

  • Lee, In-Seong;Choi, Gi-Woong;Kim, So-Lyung;Lee, Ki-Ho;Kim, Jin-Woo
    • Asia pacific journal of information systems
    • /
    • v.19 no.1
    • /
    • pp.113-145
    • /
    • 2009
  • Recently, due to the globalization of the IT (information technology) market, devices and systems designed in one country are used in other countries as well. This phenomenon is becoming a key factor behind the increased interest in cross-cultural, or cross-national, research within the IT area. However, as the IT market becomes bigger and more globalized, a great number of IT practitioners are having difficulty designing and developing devices or systems that provide an optimal user experience. This is because the user experience of a device or system is affected not only by tangible factors, such as language and a country's economic or industrial power, but also by invisible and intangible factors. Among such intangible factors, the cultural characteristics of users from different countries may affect the user experience of certain devices or systems because cultural characteristics affect how users understand and interpret them. In other words, when users evaluate the quality of the overall user experience, each user's cultural characteristics act as a perceptual lens that leads the user to focus on certain elements of the experience. Therefore, the IT field needs to consider cultural characteristics when designing or developing devices or systems and when planning a localization strategy. In this environment, existing IS studies identify culture with country, emphasize the importance of culture from a national-level perspective, and assume that users within the same country share the same cultural characteristics. Under these assumptions, such studies focus on the moderating effects of national-level cultural characteristics within a given theoretical framework. This approach follows cross-cultural studies such as Hofstede (1980), which provide numerical results and measurement items for cultural characteristics; using such results or items increases the efficiency of studies. However, national-level culture has limitations in forecasting and explaining individual-level behaviors such as voluntary device or system usage, because individual cultural characteristics are the outcome not only of the national culture but also of the cultures of a race, company, local area, family, and other groups that are formed through interaction within the group. Therefore, national or nationally dominant cultural characteristics have limitations in forecasting and explaining the cultural characteristics of an individual. Moreover, past studies in psychology suggest that different cultural characteristics may coexist within a single individual depending on the subject being measured or its context. For example, with respect to individualism vs. collectivism, one of the major cultural dimensions, an individual may show collectivistic characteristics with family or friends but individualistic characteristics in the workplace. Acknowledging these limitations of past studies, this study therefore examines, within the framework of a theoretically integrated model of user satisfaction and emotional attachment developed in a former study, how the effects of different experience elements on emotional attachment or user satisfaction differ depending on the individual cultural characteristics related to the usage of a system or device.
To do this, this study hypothesized moderating effects of the four cultural dimensions suggested by Hofstede (1980) (uncertainty avoidance, individualism vs. collectivism, masculinity vs. femininity, and power distance) within the theoretically integrated model of emotional attachment and user satisfaction. These moderating effects were then tested statistically through surveys of users of four digital devices (mobile phone, MP3 player, LCD TV, and refrigerator) in three countries (the US, Germany, and Russia). To explain and forecast the behavior of device or system users, individual cultural characteristics must be measured, and they must be measured separately for each target device or system. Through this suggestion, this study hopes to provide new and useful perspectives for future IS research.
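
A minimal sketch of the kind of moderation test described above: an OLS model in which an individual-level cultural dimension, measured per respondent toward a specific device, moderates the effect of an experience element on satisfaction via an interaction term. The survey file and column names are hypothetical, and this is not the authors' exact model:

```python
# Moderated regression sketch: does uncertainty avoidance (individual-level,
# device-specific) moderate the usability -> satisfaction effect?
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("device_survey.csv")  # hypothetical: one row per respondent-device

model = smf.ols(
    "satisfaction ~ usability * uncertainty_avoidance + C(country) + C(device)",
    data=df,
).fit()
# A significant usability:uncertainty_avoidance coefficient would indicate that
# the experience element's effect depends on the individual cultural trait.
print(model.summary().tables[1])
```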

Intelligent Brand Positioning Visualization System Based on Web Search Traffic Information : Focusing on Tablet PC (웹검색 트래픽 정보를 활용한 지능형 브랜드 포지셔닝 시스템 : 태블릿 PC 사례를 중심으로)

  • Jun, Seung-Pyo;Park, Do-Hyung
    • Journal of Intelligence and Information Systems
    • /
    • v.19 no.3
    • /
    • pp.93-111
    • /
    • 2013
  • As the Internet and information technology (IT) continue to develop and evolve, the issue of big data has emerged at the foreground of scholarly and industrial attention. Big data is generally defined as data that exceed the range that can be collected, stored, managed, and analyzed by existing conventional information systems; the term also refers to the new technologies designed to effectively extract value from such data. With the widespread dissemination of IT systems, continual efforts have been made in various fields of industry, such as R&D, manufacturing, and finance, to collect and analyze immense quantities of data in order to extract meaningful information and use it to solve various problems. Since IT has converged with many industries, digital data are now being generated at a remarkably accelerating rate, while developments in state-of-the-art technology have led to continual enhancements in system performance. The types of big data currently receiving the most attention include information available within companies, such as information on consumer characteristics, purchase records, logistics, and logs indicating the usage of products and services by consumers, as well as information accumulated outside companies, such as the web search traffic of online users, social network information, and patent information. Among these types of big data, web searches performed by online users constitute one of the most effective and important sources of information for marketing purposes because consumers search for information on the Internet in order to make efficient and rational choices. Recently, Google has provided public access to its information on the web search traffic of online users through a service named Google Trends. Research that uses this web search traffic information to analyze the information search behavior of online users is now receiving much attention in academia and industry. Studies using web search traffic information can be broadly classified into two fields. The first consists of empirical demonstrations that web search information can be used to forecast social phenomena, the purchasing power of consumers, the outcomes of political elections, and so on. The other focuses on using web search traffic information to observe consumer behavior, for example by identifying the attributes of a product that consumers regard as important or by tracking changes in consumers' expectations, but relatively little research has been completed in this field. In particular, to the extent of our knowledge, hardly any studies related to brands have attempted to use web search traffic information to analyze the factors that influence consumers' purchasing activities. This study aims to demonstrate that consumers' web search traffic information can be used to derive the relations among brands and the relations between an individual brand and product attributes. When consumers input their search words on the web, they may use a single keyword, but they also often input multiple keywords to seek related information (this is referred to as simultaneous searching). A consumer performs a simultaneous search either to compare two product brands and obtain information on their similarities and differences, or to acquire more in-depth information about a specific attribute of a specific brand.
Web search traffic information shows that the quantity of simultaneous searches using certain keywords increases as the relation between them becomes closer in the consumer's mind, so the relations between keywords can be derived by collecting this relational data and subjecting it to network analysis. Accordingly, this study proposes a method of analyzing how brands are positioned by consumers and what relationships exist between product attributes and an individual brand, using simultaneous search traffic information. It also presents case studies demonstrating the actual application of this method, with a focus on tablet PCs, an innovative product group.
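
A minimal sketch of the network analysis described above: treat simultaneous-search volumes between keyword pairs (brands and attributes) as edge weights and read brand positioning off the resulting graph. The volumes below are stand-ins for Google Trends-style co-search data:

```python
# Co-search keyword network for brand positioning; volumes are stand-ins.
import networkx as nx

co_search = {("brand_A", "brand_B"): 80,   # heavy comparison searching
             ("brand_A", "battery"): 55,
             ("brand_B", "display"): 60,
             ("brand_A", "price"): 20}

G = nx.Graph()
for (a, b), volume in co_search.items():
    G.add_edge(a, b, weight=volume)

# centrality (unweighted here) hints at a keyword's prominence in the network;
# the weighted layout places strongly co-searched keywords closer together,
# giving a 2D positioning map.
centrality = nx.degree_centrality(G)
positions = nx.spring_layout(G, weight="weight", seed=0)
print(centrality)
```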