• Title/Summary/Keyword: Multiple Users

Search Results: 1,611

Preliminary Study on All-in-JPEG with Multi-Content Storage Format extending JPEG (JPEG를 확장한 멀티 콘텐츠 저장 포맷 All-in-JPEG에 관한 예비 연구)

  • Yu-Jin Kim;Kyung-Mi Kim;Song-Yeon Yoo;Chae-Won Park;Kitae Hwang;In-Hwan Jung;Jae-Moon Lee
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.23 no.5
    • /
    • pp.183-189
    • /
    • 2023
  • This paper proposes All-in-JPEG, a new format that extends JPEG to hold not only multiple photos but also other media such as audio and text. All-in-JPEG appends images, audio, and text to an existing JPEG file and stores the accompanying meta information in JPEG's APP3 segment. With All-in-JPEG, smartphone users can save the many pictures of a burst shot in a single file, which is also very convenient to share with others. It also enables live photos, such as attaching a short audio clip recorded when the photo was taken or animating a part of the photo. In addition, it can be used for various applications, such as a photo-diary app that stores images, voice, and diary text in a single All-in-JPEG file. In this paper, we developed an app that creates and edits All-in-JPEG files, a photo-diary app, and a magic photo function, and verified the feasibility of All-in-JPEG through them.
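A minimal sketch of the segment mechanics the abstract describes: appending a payload in a JPEG APP3 application segment and reading it back. The payload layout here is an invented placeholder, not the actual All-in-JPEG meta format.

```python
import struct

APP3_MARKER = b"\xff\xe3"  # JPEG APP3 application segment marker


def append_app3(jpeg_bytes: bytes, payload: bytes) -> bytes:
    """Insert an APP3 segment carrying `payload` right after the SOI marker.

    The two-byte length field counts itself plus the payload, so one
    segment can carry at most 65533 payload bytes.
    """
    assert jpeg_bytes[:2] == b"\xff\xd8", "not a JPEG (missing SOI)"
    assert len(payload) <= 0xFFFF - 2, "payload too large for one segment"
    segment = APP3_MARKER + struct.pack(">H", len(payload) + 2) + payload
    return jpeg_bytes[:2] + segment + jpeg_bytes[2:]


def read_app3(jpeg_bytes: bytes):
    """Walk the marker-segment stream and return the first APP3 payload.

    A full reader would stop at SOS and handle markers without length
    fields; this sketch assumes plain length-prefixed segments.
    """
    i = 2  # skip SOI
    while i + 4 <= len(jpeg_bytes) and jpeg_bytes[i] == 0xFF:
        marker = jpeg_bytes[i:i + 2]
        (length,) = struct.unpack(">H", jpeg_bytes[i + 2:i + 4])
        if marker == APP3_MARKER:
            return jpeg_bytes[i + 4:i + 2 + length]
        i += 2 + length
    return None
```

Because APPn segments are ignored by ordinary decoders, a file extended this way still opens as a normal JPEG in viewers that do not know the extra content.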

Analysis of the relationship between satisfaction and use value of the amount of construction technology information provided (건설기술정보 제공물량의 만족도 및 사용 가치 간의 관계 분석)

  • Seong Yun Jeong;Jin Uk Kim
    • Smart Media Journal
    • /
    • v.12 no.11
    • /
    • pp.154-164
    • /
    • 2023
  • The Construction Technology Information System collects, processes, and provides to the public, free of charge, 16 types of construction technology texts and index information, such as construction standards, construction practices, and construction reports, required at construction sites to enhance domestic construction technology. This study judged that user satisfaction could be increased if the budget were allocated first to building the most effective construction technology information databases. For each type of construction technology information, we surveyed respondents' satisfaction with the quantity provided and the use value gained from acquiring and utilizing the information. Multiple regression analysis was then used to identify how respondents' demographic characteristics affect user satisfaction and which factors affect the use value of each type of information, and the reliability of and correlation between the independent and dependent variables were analyzed. The correlation between respondents' demographic characteristics and satisfaction with the provided quantity was generally low, but construction experience and age were found to have an effect. In particular, respondents with more construction experience were more satisfied with the provided quantity of low-level technical standards, and older age was found to affect satisfaction with the provided quantity of information supporting small and medium-sized businesses.
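The core analytic step, regressing satisfaction on demographic predictors, can be sketched as follows. The data here are synthetic; the two predictors (construction experience and age) are the factors the study found influential, and the coefficients are invented for illustration.

```python
import numpy as np

# Synthetic respondents: years of construction experience and age as
# predictors of satisfaction with the provided quantity of information.
rng = np.random.default_rng(0)
n = 200
experience = rng.uniform(0, 30, n)      # years of construction experience
age = rng.uniform(25, 65, n)            # respondent age
satisfaction = 2.0 + 0.05 * experience + 0.01 * age + rng.normal(0, 0.3, n)

# Ordinary least squares, the estimator behind SPSS-style multiple
# regression: solve for [intercept, b_exp, b_age] minimizing squared error.
X = np.column_stack([np.ones(n), experience, age])
beta, *_ = np.linalg.lstsq(X, satisfaction, rcond=None)
intercept, b_exp, b_age = beta
```

With enough respondents the recovered coefficients approach the generating values, which is the sense in which such a fit identifies "factors that affect satisfaction".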

Microbial Forensics: Comparison of MLVA Results According to NGS Methods, and Forensic DNA Analysis Using MLVA (미생물법의학: 차세대염기서열분석 방법에 따른 MLVA 결과 비교 및 이를 활용한 DNA 감식)

  • Hyeongseok Yun;Seungho Lee;Seunghyun Lim;Daesang Lee;Sehun Gu;Jungeun Kim;Juhwan Jeong;Seongjoo Kim;Gyeunghaeng Hur;Donghyun Song
    • Journal of the Korea Institute of Military Science and Technology
    • /
    • v.27 no.4
    • /
    • pp.507-515
    • /
    • 2024
  • Microbial forensics is a scientific discipline that analyzes evidence related to biological crimes by identifying the origin of microorganisms. Multiple locus variable number tandem repeat analysis (MLVA) is a microbiological analysis method that identifies subtypes within a species based on the number of tandem repeats in the genome, and advances in next generation sequencing (NGS) technology have enabled in silico analysis of full-length whole genome sequences. In this paper, we analyzed unknown samples provided by the Robert Koch Institute (RKI) through the external quality assessment exercise (EQAE) of the United Nations Secretary-General's Mechanism (UNSGM), in which we officially participated in 2023. Through nucleic acid isolation and genetic sequence analysis, we confirmed that the 3 unknown samples were B. anthracis. MLVA results for 32 loci of B. anthracis were analyzed using genome sequences obtained from NGS (NextSeq and MinION) and Sanger sequencing. MLVA typing using the short-read NGS platform (NextSeq) showed a high probability of assembly errors when a tandem repeat was greater than 200 bp, while the long-read platform (MinION) showed higher accuracy than NextSeq, although insertions and deletions were observed. We also showed that hybrid assembly can correct most indel errors caused by MinION. Based on the MLVA results, genetic identification was performed against 2,975 published MLVA profiles of B. anthracis, and the MLVA results of 10 strains were identical to those of the 3 unknown samples. Whole genome alignment of the 10 strains and the 3 unknown samples identified all samples as B. anthracis strain A4564, which is associated with injectional anthrax isolates in heroin users.
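The copy-number counting at the heart of in silico MLVA can be sketched as below; this is a toy illustration of the principle, not the pipeline used in the study.

```python
def count_tandem_repeats(sequence: str, motif: str, start: int) -> int:
    """Count consecutive copies of `motif` in `sequence` beginning at `start`.

    MLVA derives a strain's type from such copy numbers at a panel of
    loci (32 loci of B. anthracis in the study above); assembly errors
    in long repeats shift exactly this count, which is why repeat size
    matters for short-read platforms.
    """
    count, i, m = 0, start, len(motif)
    while sequence[i:i + m] == motif:
        count += 1
        i += m
    return count
```

In practice the locus position and motif come from reference coordinates; here they are passed in directly.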

Does Home Oxygen Therapy Slow Down the Progression of Chronic Obstructive Pulmonary Diseases?

  • Han, Kyu-Tae;Kim, Sun Jung;Park, Eun-Cheol;Yoo, Ki-Bong;Kwon, Jeoung A;Kim, Tae Hyun
    • Journal of Hospice and Palliative Care
    • /
    • v.18 no.2
    • /
    • pp.128-135
    • /
    • 2015
  • Purpose: As the National Health Insurance Service (NHIS) began to cover home oxygen therapy (HOT) services in 2006, the new services were expected to contribute to better overall outcomes for patients with chronic obstructive pulmonary disease (COPD). We examined whether the use of HOT has helped slow the progression of COPD. Methods: We examined hospital claim data (N=10,798) of COPD inpatients who were treated in 2007~2012. We performed chi-square tests to analyze the differences in the changes to respiratory impairment grades. Multiple logistic regression analysis was used to identify factors associated with the use of HOT. Finally, a generalized linear mixed model was used to examine the association between HOT treatment and changes to respiratory impairment grades. Results: A total of 2,490 patients had grade 1 respiratory impairment, and patients with grade 2 or 3 totaled 8,308. The OR for use of HOT was lower in grade 3 patients than in others (OR: 0.33, 95% CI: 0.30~0.37). For maintenance or mitigation of impairment across all grades, HOT users had a higher OR than non-users (OR: 1.41, 95% CI: 1.23~1.61). Conclusion: HOT was effective in maintaining or mitigating respiratory impairment in COPD patients.
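The reported effect sizes are odds ratios with confidence intervals; a minimal sketch of how an OR and a Wald 95% CI are computed from a 2x2 table (the cell counts below are invented, not the study's data):

```python
import math


def odds_ratio_ci(a: int, b: int, c: int, d: int, z: float = 1.96):
    """Odds ratio and Wald 95% CI for a 2x2 table:

          outcome+  outcome-
    exposed    a        b
    unexposed  c        d
    """
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi
```

An interval that excludes 1 (like the study's 1.23~1.61 for maintenance/mitigation) indicates a statistically significant association at the 5% level.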

Development of Bicycle Accident Prediction Model and Suggestion of Countermeasures on Bicycle Accidents (자전거 사고예측모형 개발 및 개선방안 제시에 관한 연구)

  • Kwon, Sung-Dae;Kim, Yoon-Mi;Kim, Jae-Gon;Ha, Tae-Jun
    • KSCE Journal of Civil and Environmental Engineering Research
    • /
    • v.35 no.5
    • /
    • pp.1135-1146
    • /
    • 2015
  • This study aims to improve the safety of bicycle traffic and promote the use of the bicycle, a principal means of non-powered, non-carbon transportation, in order to cope with worldwide crises such as climate change and energy depletion and to implement a sustainable traffic system. To this end, we analyzed the problems of bicycle roads currently installed and in operation, and developed a bicycle accident prediction model, proceeding as follows. First, this study presented the current status of bicycle roads in Korea along with accident data, collected data on bicycle traffic accidents nationwide for the most recent three years (2009~2011), and analyzed the characteristics of bicycle traffic accidents based on those data. Second, this study selected the variables affecting the number of bicycle accidents through an analysis of accident characteristics in Jeollanam-do, and developed an accident prediction model using multiple regression analysis in SPSS Statistics 21; the number of accidents relative to road extension by road type (intersection, crosswalk, other single road) was used. To verify the resulting model, this study used data on bicycle accidents in Gwangju in 2011 and compared the predicted values with the actual number of accidents. As a result, the reliability of the accident prediction model was confirmed, as the predictions matched the actual numbers except for certain data. Third, this study carried out a field survey of bicycle roads as well as a questionnaire on satisfaction with bicycle roads and bicycle use, analyzed the problems of bicycle roads, and presented safety improvement measures for the problems identified along with plans to promote bicycle use. This study is expected to serve as fundamental data for planning and reorganizing bicycle roads in the future, to improve the safety of bicycle users, and to promote the use of the bicycle as a means of transportation.
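The validation step, applying the fitted model to hold-out data and comparing predictions with observed accident counts, can be sketched as below. The model form, coefficients, and all numbers are invented placeholders, not the study's estimates.

```python
# Hypothetical linear accident-frequency model of the kind the study fits
# per road type: accidents = intercept + coef * road extension (km).
def predict_accidents(length_km: float, coef: float, intercept: float) -> float:
    """Predicted accident count for a road segment of the given length."""
    return intercept + coef * length_km


observed = [12, 7, 3]        # observed accident counts at three hold-out sites
lengths = [4.0, 2.0, 1.0]    # bikeway extension (km) at each site
predicted = [predict_accidents(l, coef=2.8, intercept=0.5) for l in lengths]

# Mean absolute error as a simple agreement check between model and reality
mae = sum(abs(p - o) for p, o in zip(predicted, observed)) / len(observed)
```

A small MAE relative to the observed counts is the kind of agreement the study reports when comparing predictions against the Gwangju 2011 data.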

A Meta Analysis of Using Structural Equation Model on the Korean MIS Research (국내 MIS 연구에서 구조방정식모형 활용에 관한 메타분석)

  • Kim, Jong-Ki;Jeon, Jin-Hwan
    • Asia pacific journal of information systems
    • /
    • v.19 no.4
    • /
    • pp.47-75
    • /
    • 2009
  • Recently, research on Management Information Systems (MIS) has laid out theoretical foundations and academic paradigms by introducing diverse theories, themes, and methodologies. In particular, the academic paradigms of MIS encourage a user-friendly approach by developing technologies from the users' perspective, which reflects the existence of strong causal relationships between information systems and user behavior. As in other areas of social science, the use of structural equation modeling (SEM) has rapidly increased in recent years, especially in MIS. The SEM technique is important because it provides powerful ways to address key IS research problems; it has a unique ability to examine a series of causal relationships while analyzing multiple independent and dependent variables simultaneously. Despite its many benefits to MIS researchers, the technique has some potential pitfalls. The objective of this study is to provide guidelines for the appropriate use of SEM, based on an assessment of its current use in MIS research. This study focuses on several statistical issues related to the use of SEM in MIS research. Selected articles are assessed in three parts through a meta analysis. The first part relates to the initial specification of the theoretical model of interest; the second to data screening prior to model estimation and testing; and the last to the estimation and testing of theoretical models on empirical data. This study reviewed the use of SEM in 164 empirical research articles published in four major MIS journals in Korea (APJIS, ISR, JIS and JITAM) from 1991 to 2007. APJIS, ISR, JIS and JITAM accounted for 73, 17, 58, and 16 of the applications, respectively. The number of published applications has increased over time.
LISREL was the most frequently used SEM software among MIS researchers (97 studies (59.15%)), followed by AMOS (45 studies (27.44%)). In the first part, regarding the initial specification of the theoretical model of interest, all of the studies used cross-sectional data. Studies that use cross-sectional data may be better able to explain their structural model as a set of relationships. Meanwhile, most SEM studies employed confirmatory-type analysis (146 articles (89%)). Concerning model formulation, 159 (96.9%) of the studies specified a full structural equation model; in only 5 studies was SEM used for a measurement model with a set of observed variables. The average sample size across all models was 365.41, with samples as small as 50 and as large as 500. The second part concerns data screening prior to model estimation and testing. Data screening is important, particularly in defining how researchers deal with missing values. Overall, data screening was discussed in 118 (71.95%) of the studies, while no study discussed evidence of multivariate normality for its models. In the third part, issues related to the estimation and testing of theoretical models on empirical data, assessing model fit is one of the most important issues because it provides adequate statistical power for research models. Multiple fit indices were used in the SEM applications. The χ² test was reported in most studies (146 (89%)), whereas the normed-χ² test was reported less frequently (65 studies (39.64%)); note that a normed-χ² of 3 or lower is required for adequate model fit. The most popular model fit indices were GFI (109 (66.46%)), AGFI (84 (51.22%)), NFI (44 (47.56%)), RMR (42 (25.61%)), CFI (59 (35.98%)), RMSEA (62 (37.80%)), and NNFI (48 (29.27%)).
Regarding tests of construct validity, convergent validity was examined in 109 studies (66.46%) and discriminant validity in 98 (59.76%). 81 studies (49.39%) reported the average variance extracted (AVE). However, there was little discussion of direct (47 (28.66%)), indirect, and total effects in the SEM models. Based on these findings, we suggest general guidelines for the use of SEM and propose recommendations concerning latent variable models, raw data, sample size, data screening, reporting of parameter estimates, model fit statistics, multivariate normality, confirmatory factor analysis, reliabilities, and the decomposition of effects.
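The normed-χ² criterion discussed above reduces to a one-line check that reviewers can apply to any reported model:

```python
def normed_chi_square(chi2: float, df: int) -> float:
    """Normed chi-square: the model chi-square divided by its degrees of
    freedom, the relative-fit measure the meta-analysis tallies."""
    return chi2 / df


def adequate_fit(chi2: float, df: int, threshold: float = 3.0) -> bool:
    """Common SEM reporting rule: a normed chi-square of 3 or lower is
    taken to indicate adequate model fit."""
    return normed_chi_square(chi2, df) <= threshold
```

Because χ² grows with sample size while df does not, the ratio is a rough correction that makes fit comparable across the differently sized samples the review found (50 to 500).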

Analysis of Authority Control System in Collecting Repository -from the case of Archival Management System in Korea Democracy Foundation- (수집형 기록관의 전거제어시스템 분석 - 민주화운동기념사업회 사료관리시스템의 사례를 중심으로 -)

  • Lee, Hyun-Jeong
    • The Korean Journal of Archival Studies
    • /
    • no.13
    • /
    • pp.91-134
    • /
    • 2006
  • In general, personally collected archives (manuscripts) are in poor physical condition, and contextual information about the archives and the history of their production is usually acquired only in part. A collecting repository therefore needs to control effectively the names of the producers of archives collected through various channels, and to accumulate provenance information, the key element for understanding the background of production. Authority control and provenance information management must be organized from the very beginning of acquisition, which means collecting the necessary information as part of the acquisition process itself. This thesis aims to verify the necessity of authority control and the accumulation of provenance information in a collecting repository, and to suggest what should be considered in building an archival authority system. To this end, it examines the necessity of authority control in archival management, and investigates the standards, work processes, and accumulation processes of archival authority control. In the archival authority system, provenance information management and authority control are organized across every step of archival management, from the lead file, to producer names at archival registration, to archival description at acquisition. Information is registered and described at the proper points in time, and finally all of it, including the authority control that governs headings in authority management, must be organized for use in the intellectual management of archives and in finding aids. The features of the archival authority system are as follows. First, the authority file types necessary for the authority control of democracy movement archives consist of the names of groups, persons, and events, and terminology (subject names).
Second, the basic record structure and description elements of the authority collection of the Korea Democracy Foundation Archives apply Chapter 1 of ISAAR(CPF), with some necessary elements added, while details of the description rules, such as word spacing and the use of periods, apply Chapter 4 of KCR, adapted to the features of the archival management system. The input format of authority records is based on EAC (Encoded Archival Context). Third, the system makes it easier for users to reach the sources they want by linking authority terms systematically: rather than the traditional authority system, which usually expresses only related terms ('see also'), it expands term relations so that terms can be connected variously and concretely with broader, narrower, earlier, and later terms. The authority control of the archival management system can thus effectively collect and manage the functions and main activities of various groups, beyond its basic function of controlling headings; it can express the multiple and intermediary relationships between archives and producers, or among producers; and it provides an expanded records information service that satisfies users' various requests through indexing. Finally, by applying the international standard ISAAR(CPF) as in this instance of authority management, reorganizing the description elements into an appropriate form and establishing the authority file types to be managed, this study can serve as a reference for building archival authority systems in collecting repositories hereafter.

Change Acceptable In-Depth Searching in LOD Cloud for Efficient Knowledge Expansion (효과적인 지식확장을 위한 LOD 클라우드에서의 변화수용적 심층검색)

  • Kim, Kwangmin;Sohn, Yonglak
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.2
    • /
    • pp.171-193
    • /
    • 2018
  • LOD (Linked Open Data) cloud is a practical implementation of the semantic web. We suggest a new method that provides identity links conveniently in the LOD cloud and allows changes in an LOD to be reflected in search results without omissions. An LOD publishes detailed descriptions of entities in RDF triple form. An RDF triple is composed of a subject, a predicate, and an object, and presents a detailed description of an entity. Links in the LOD cloud, named identity links, are realized by asserting that entities of different RDF triples are identical. Currently, an identity link is provided by explicitly creating a link triple that associates its subject and object with the source and target entities; link triples are then appended to the LOD. With identity links, knowledge obtained from one LOD can be expanded with knowledge from other LODs. The goal of the LOD cloud is to give users this opportunity for knowledge expansion. Appending link triples to an LOD, however, is seriously difficult, because identity links must be discovered between entities one by one notwithstanding the enormous scale of LOD, and newly added entities cannot be reflected in search results until identity links heading for them are serialized and published to the LOD cloud. Instead of creating enormous numbers of identity links, we propose that each LOD prepare its own link policy. A link policy specifies a set of target LODs to link to and the constraints necessary to discover identity links to entities in those target LODs. On searching, it becomes possible to access newly added entities and reflect them in search results without omissions by referencing the link policies. A link policy specifies a set of predicate pairs for discovering identity between associated entities in the source and target LODs. For the link policy specification, we have suggested a set of vocabularies that conform to RDFS and OWL.
Identity between entities is evaluated according to the similarity of the objects that the source and target entities associate with the predicate pair in the link policy. We implemented a system, "Change Acceptable In-Depth Searching System (CAIDS)". With CAIDS, a user's search request starts from the depth_0 LOD, i.e., surface searching. Referencing the link policies of LODs, CAIDS proceeds to in-depth searching over the LODs of the next depths, supplementing the identity links derived from the link policies with explicit link triples. Following the identity links, CAIDS's in-depth searching progresses: the content of an entity obtained from the depth_0 LOD expands with the contents of entities in other LODs that have been found identical to it. Expanding the content of a depth_0 LOD entity without the user being aware of those other LODs is the implementation of knowledge expansion, the goal of the LOD cloud. The more identity links in the LOD cloud, the wider the content expansion; we have therefore suggested a new way to create identity links abundantly and supply them to the LOD cloud. Experiments on CAIDS were performed against the DBpedia LODs of Korea, France, Italy, Spain, and Portugal. They show that CAIDS provides appropriate expansion and inclusion ratios as long as the degree of similarity between source and target objects is 0.8~0.9. The expansion ratio, for each depth, is the ratio of the entities discovered at that depth to the entities of the depth_0 LOD; the inclusion ratio is the ratio of the entities discovered only with explicit links to the entities discovered only with link policies. With similarity degrees below 0.8, expansion becomes excessive and contents become distorted; a similarity degree of 0.8~0.9 also yields an appropriate number of RDF triples from searches. The experiments also evaluated the confidence degree of the contents expanded through in-depth searching.
The confidence degree of content is directly coupled with an entity's identity ratio, the degree of identity to the corresponding entity of the depth_0 LOD. An entity's identity ratio is obtained by multiplying the source LOD's confidence by the source entity's identity ratio. By tracing the identity links in advance, an LOD's confidence is evaluated according to the number of identity links incoming to the entities in that LOD. In evaluating the identity ratio, the concept of identity agreement, meaning that multiple identity links head to a common entity, was considered. With this concept, the experimental results show that the identity ratio decreases as depth deepens, but rebounds as the depth deepens further. For each entity, as the number of identity links increases, the identity ratio rebounds earlier and finally reaches 1. We found that more than 8 identity links per entity would lead users to trust the expanded contents. The link-policy-based in-depth searching method we propose is expected to contribute to the abundant provision of identity links to the LOD cloud.
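A sketch of the identity-ratio propagation described above. The multiplication rule comes from the abstract; the noisy-OR combination for "identity agreement" is our own assumption, since the abstract does not give the exact combination rule.

```python
def propagate_identity_ratio(source_ratio: float, lod_confidence: float) -> float:
    """Identity ratio at the next depth: the source entity's identity
    ratio multiplied by the source LOD's confidence (the abstract's
    stated formula)."""
    return source_ratio * lod_confidence


def combine_agreement(ratios) -> float:
    """Combine the ratios of several identity links agreeing on one
    target entity, noisy-OR style: each extra agreeing link pushes the
    combined ratio toward 1, matching the reported rebound as links
    accumulate. (Assumed rule, for illustration only.)"""
    disagree = 1.0
    for r in ratios:
        disagree *= 1.0 - r
    return 1.0 - disagree
```

Under this model a chain of links decays multiplicatively with depth, while many links agreeing on the same entity drive the ratio back up, which is qualitatively the decrease-then-rebound behavior the experiments report.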

A Study on the Meaning and Strategy of Keyword Advertising Marketing

  • Park, Nam Goo
    • Journal of Distribution Science
    • /
    • v.8 no.3
    • /
    • pp.49-56
    • /
    • 2010
  • At the initial stage of Internet advertising, banner advertising was in fashion. As the Internet became a central part of daily life and competition in the on-line advertising market grew fierce, there was not enough space for banner advertising, which rushed to portal sites only. All these factors were responsible for an upsurge in advertising prices. Consequently, the high-cost, low-efficiency problems of banner advertising were raised, which led to the emergence of keyword advertising as a new type of Internet advertising to replace its predecessor. In the early 2000s, when Internet advertising became active, display advertising, including banner advertising, dominated the Net. However, display advertising showed signs of gradual decline and registered negative growth in 2009, whereas keyword advertising grew rapidly and began to outdo display advertising as of 2005. Keyword advertising refers to the technique of exposing relevant advertisements at the top of search sites when one searches for a keyword. Instead of exposing advertisements to unspecified individuals as banner advertising does, keyword advertising, a targeted advertising technique, shows advertisements only when customers search for a desired keyword, so that only highly prospective customers are given a chance to see them; in this context, it is also referred to as search advertising. It is regarded as more aggressive advertising with a higher hit rate than earlier forms in that, instead of the seller discovering customers and running advertisements for them as with TV, radio, or banner advertising, it exposes advertisements to customers who are already visiting. Keyword advertising makes it possible for a company to seek publicity on line simply by making use of a single word, and to achieve maximum efficiency at minimum cost.
The strong point of keyword advertising is that customers are allowed to contact the products in question directly, through advertising that is more efficient than that of mass media such as TV and radio. The weak point is that a company must register its advertisement on each and every portal site and finds it hard to exercise substantial supervision over its advertisement, so advertising expenses may exceed profits. Keyword advertising serves as the most appropriate method of advertising for the sales and publicity of small and medium enterprises, which need maximum advertising effect at low advertising cost. At present, keyword advertising is divided into CPC advertising and CPM advertising. The former, known as the most efficient technique, is also referred to as advertising based on the meter-rate system: a company pays according to the number of clicks a searched keyword receives from users. This model is representatively adopted by Overture, Google's Adwords, Naver's Clickchoice, and Daum's Clicks. CPM advertising depends on a flat-rate payment system, making a company pay for its advertisement on the basis of the number of exposures, not the number of clicks; this method fixes the price of an advertisement per 1,000 exposures and is mainly adopted by Naver's Timechoice, Daum's Speciallink, and Nate's Speedup. At present, the CPC method is most frequently adopted. Its weak point is that advertising costs can rise through repeated clicks from the same IP. If a company makes good use of strategies for maximizing the strong points of keyword advertising and complementing its weak points, it is highly likely to turn its visitors into prospective customers.
Accordingly, an advertiser should analyze customers' behavior and approach them in a variety of ways, trying hard to find out what they want. With this in mind, he or she has to put multiple keywords into use when running ads. When first running an ad, the advertiser should give priority to which keyword to select, considering how many search-engine users will click the keyword in question and how much the advertisement will cost. As the popular keywords that search-engine users frequently use carry an expensive unit cost per click, advertisers without much money at the initial phase should pay attention to detailed keywords suited to their budget. Detailed keywords, also referred to as peripheral or extension keywords, are combinations of major keywords. Most keyword ads are in the form of text. The biggest strong point of text-based advertising is that it looks like search results, arousing little antipathy; but it fails to attract much attention precisely because most keyword advertising is text. Image-embedded advertising is easier to notice thanks to its images, but it is exposed on the lower part of a web page and is recognized as an advertisement, which leads to a low click-through rate; its strong point, however, is that its prices are lower than those of text-based advertising. If a company owns a logo or a product that people recognize easily, it is well advised to make good use of image-embedded advertising to attract Internet users' attention. Advertisers should analyze their logs and examine customers' responses based on the events of the sites in question and the composition of products, as a vehicle for monitoring customer behavior in detail.
Besides, keyword advertising allows advertisers to analyze the advertising effects of exposed keywords through log analysis. Log analysis refers to a close analysis of the current situation of a site based on information about visitors, the number of visitors and page views, and cookie values. A user's IP, the pages used, the time of use, and cookie values are stored in the log files generated by each Web server. The log files contain a huge amount of data, and since it is almost impossible to analyze them directly, one analyzes them using log-analysis solutions. The generic information that can be extracted from log-analysis tools includes the total number of page views, the average number of page views per day, the number of basic page views, the number of page views per visit, the total number of hits, the average number of hits per day, the number of hits per visit, the number of visits, the average number of visits per day, the net number of visitors, the average number of visitors per day, one-time visitors, visitors who have come more than twice, and average usage hours. These data are useful for analyzing the situation and current status of rival companies as well as for benchmarking. As keyword advertising exposes advertisements exclusively on search-result pages, competition among advertisers attempting to preoccupy popular keywords is very fierce. Some portal sites keep giving priority to existing advertisers, whereas others give all advertisers the chance to purchase the keywords in question once the advertising contract is over.
In the case of sites giving priority to established advertisers, an advertiser who relies on keywords sensitive to season and timeliness may as well purchase a vacant advertising slot lest he or she miss the appropriate timing for advertising. Naver, however, does not give priority to existing advertisers for any keyword advertisements; in this case, one can preoccupy keywords by entering into a contract after confirming the contract period for advertising. This study is designed to examine marketing for keyword advertising and to present effective strategies for keyword advertising marketing. At present, the Korean CPC advertising market is virtually monopolized by Overture. Its strong points are that Overture is based on the CPC charging model and that its advertisements are registered at the top of the most representative portal sites in Korea, advantages that make it the most appropriate medium for small and medium enterprises. However, the CPC method of Overture has weak points too: it is not the only, or a perfect, advertising model among the search advertisements in the on-line market. So it is absolutely necessary that small and medium enterprises, including independent shopping malls, complement the weaknesses of the CPC method and make good use of strategies for maximizing its strengths so as to increase their sales and create points of contact with customers.
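The two billing models compared above differ only in what is metered; a minimal sketch:

```python
def cpc_cost(clicks: int, price_per_click: float) -> float:
    """Meter-rate (CPC) billing: pay only for clicks on the keyword ad."""
    return clicks * price_per_click


def cpm_cost(impressions: int, price_per_mille: float) -> float:
    """Flat-rate (CPM) billing: pay per 1,000 exposures, clicked or not."""
    return impressions / 1000 * price_per_mille
```

The trade-off follows directly: CPC cost tracks engagement (and is vulnerable to repeated clicks from the same IP), while CPM cost tracks exposure regardless of whether anyone clicks.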


A digital Audio Watermarking Algorithm using 2D Barcode (2차원 바코드를 이용한 오디오 워터마킹 알고리즘)

  • Bae, Kyoung-Yul
    • Journal of Intelligence and Information Systems
    • /
    • v.17 no.2
    • /
    • pp.97-107
    • /
    • 2011
  • Nowadays copyright infringement is a serious issue on the Internet because digital content on the network can be copied and delivered easily, and a copy has the same quality as the original. Copyright owners and content providers therefore want a powerful solution to protect their content. A popular solution was DRM (digital rights management), which is based on encryption technology and rights control. However, after Steve Jobs, CEO of Apple, proposed a new music service paradigm without DRM, DRM-free services were launched and DRM disappeared from the online music market. Even though online music services decided not to deploy DRM, copyright owners and content providers are still searching for a way to protect their content. One technology that can replace DRM is digital audio watermarking, which can embed copyright information into the music itself. In this paper, the author proposes a new audio watermarking algorithm with two key ideas. First, the watermark information is generated as a two-dimensional barcode, which carries an error correction code, so the information can recover itself as long as the errors fall within the error tolerance. Second, the algorithm uses the chip sequences of CDMA (code division multiple access). Together these make the algorithm robust to several malicious attacks. Among the many 2D barcodes, QR code, one of the matrix barcodes, can express information more freely than the other matrix barcodes. A QR code has square finder patterns at three of its corners, which indicate the boundary of the symbol, and this feature is well suited to expressing the watermark information. That is, because the QR code is a two-dimensional, nonlinear matrix code, it can be modulated into a spread-spectrum signal and used in the watermarking algorithm.
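The abstract does not give the spreading details, but the CDMA idea it describes can be sketched as follows. This is a minimal illustration, not the authors' implementation: Walsh codes are generated by Sylvester's Hadamard recursion, and each user's bits are spread and recovered by correlation with that user's code.

```python
def walsh_codes(n):
    # Build a 2^n x 2^n Hadamard matrix by Sylvester's recursion;
    # each row is a Walsh code of length 2^n with entries +1/-1,
    # and distinct rows are mutually orthogonal.
    h = [[1]]
    for _ in range(n):
        h = [row + row for row in h] + [row + [-x for x in row] for row in h]
    return h

def spread(bits, code):
    # Map each data bit (0/1 -> -1/+1) onto the full code sequence.
    out = []
    for b in bits:
        s = 1 if b else -1
        out.extend(s * c for c in code)
    return out

def despread(signal, code):
    # Correlate each code-length chunk with the code; the sign of the
    # correlation recovers the bit, while orthogonal users cancel out.
    n = len(code)
    bits = []
    for i in range(0, len(signal), n):
        corr = sum(x * c for x, c in zip(signal[i:i + n], code))
        bits.append(1 if corr > 0 else 0)
    return bits
```

Because the rows of the Hadamard matrix are orthogonal, the superposition of several users' spread sequences can still be separated by correlating with each user's own code, which is exactly the multi-user identification property the paper relies on.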
The proposed algorithm assigns a different spread-spectrum sequence to each individual user. When the assigned code sequences are orthogonal, the watermark information of an individual user can be identified from the audio content. The algorithm uses Walsh codes as the orthogonal codes. The watermark information is rearranged from the 2D barcode into a 1D sequence and modulated by a Walsh code, and the modulated watermark is embedded into the DCT (discrete cosine transform) domain of the original audio content. For the performance evaluation, three audio samples were used: "Amazing Grace", "Oh! Carol", and "Take Me Home, Country Roads". The robustness tests were MP3 compression, an echo attack, and a sub-woofer boost. MP3 compression was performed with Cool Edit Pro 2.0 at CBR (constant bit rate) 128 kbps, 44,100 Hz, stereo. The echo attack applied an echo with initial volume 70%, decay 75%, and delay 100 msec. The sub-woofer boost attack modified the low-frequency part of the Fourier coefficients. The test results showed that the proposed algorithm is robust to these attacks. Under MP3 compression, the strength of the watermark information was not affected, and the watermark could be detected in all of the sample audios. Under the sub-woofer boost attack, the watermark was detected when the embedding strength was 0.3, and under the echo attack the watermark could be identified when the strength was greater than or equal to 0.5.
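The DCT-domain embedding step can likewise be sketched. The coefficient band (starting at index 8) and the strength `alpha` below are illustrative assumptions, not the paper's parameters: the modulated +/-1 chip sequence is added to mid-band DCT coefficients of an audio frame, and detection correlates those same coefficients with the chips.

```python
import math

def dct(x):
    # Naive orthonormal DCT-II; O(N^2), adequate for a short demo frame.
    N = len(x)
    out = []
    for k in range(N):
        s = sum(x[n] * math.cos(math.pi * (n + 0.5) * k / N) for n in range(N))
        scale = math.sqrt(1.0 / N) if k == 0 else math.sqrt(2.0 / N)
        out.append(scale * s)
    return out

def idct(X):
    # Exact inverse (orthonormal DCT-III) of the transform above.
    N = len(X)
    out = []
    for n in range(N):
        s = X[0] * math.sqrt(1.0 / N)
        s += sum(X[k] * math.sqrt(2.0 / N) * math.cos(math.pi * (n + 0.5) * k / N)
                 for k in range(1, N))
        out.append(s)
    return out

def embed(frame, chips, alpha=0.01):
    # Add the +/-1 chip sequence, scaled by alpha, to mid-band coefficients
    # (indices 8.. here, an illustrative choice of band).
    X = dct(frame)
    for i, c in enumerate(chips):
        X[8 + i] += alpha * c
    return idct(X)

def correlate(frame, chips):
    # Detector statistic: correlation of the mid-band coefficients with
    # the chips; its sign recovers the embedded bit.
    X = dct(frame)
    return sum(X[8 + i] * c for i, c in enumerate(chips))
```

By linearity of the DCT, embedding raises the detector statistic by exactly `alpha * len(chips)` over the host signal's own correlation, which is what makes a simple sign or threshold test work once `alpha` is large enough relative to the host interference.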