• Title/Summary/Keyword: Good Use

Automatic Text Extraction from News Video using Morphology and Text Shape (형태학과 문자의 모양을 이용한 뉴스 비디오에서의 자동 문자 추출)

  • Jang, In-Young;Ko, Byoung-Chul;Kim, Kil-Cheon;Byun, Hye-Ran
    • Journal of KIISE:Computing Practices and Letters
    • /
    • v.8 no.4
    • /
    • pp.479-488
    • /
    • 2002
  • In recent years, the amount of digital video in use has risen dramatically to keep pace with the increasing use of the Internet, so an automated method is needed for indexing digital video databases. Textual information, both superimposed captions and embedded scene text, appearing in a digital video can be a crucial clue for video indexing. In this paper, a new method is presented to extract both superimposed and embedded scene texts from a freeze-frame of news video. The algorithm is summarized in the following three steps. In the first step, a color image is converted into a gray-level image, and contrast stretching is applied to enhance the contrast of the input image. Then, a modified local adaptive thresholding is applied to the contrast-stretched image. The second step consists of three processes: eliminating text-like components by applying erosion, dilation, and (OpenClose+CloseOpen)/2 morphological operations; preserving text components using the (OpenClose+CloseOpen)/2 operation with a new Geo-correction method; and subtracting the two resulting images to further eliminate false-positive components. In the third, filtering step, the characteristics of each component are used, such as the ratio of the number of pixels in each candidate component to the number of its boundary pixels, and the ratio of the minor to the major axis of each bounding box. Acceptable results have been obtained using the proposed method on 300 news images, with a recognition rate of 93.6%. The method also performs well on various kinds of images when the size of the structuring element is adjusted.
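
A minimal sketch of the three-step pipeline this abstract describes, using OpenCV in Python. The kernel size, threshold parameters, shape-filter bounds, and the exact form of the subtraction step are illustrative assumptions; the paper's Geo-correction method and tuned parameters are not reproduced here.

```python
import cv2
import numpy as np

def extract_text_candidates(bgr, kernel_size=3):
    # Step 1: gray conversion, contrast stretching, local adaptive threshold.
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    stretched = cv2.normalize(gray, None, 0, 255, cv2.NORM_MINMAX)
    binary = cv2.adaptiveThreshold(stretched, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                                   cv2.THRESH_BINARY, 15, 5)

    # Step 2: (OpenClose + CloseOpen)/2 smoothing with a small structuring
    # element, then a subtraction that isolates text-sized detail. This is
    # a loose approximation of the paper's two-image subtraction.
    k = cv2.getStructuringElement(cv2.MORPH_RECT, (kernel_size, kernel_size))
    open_close = cv2.morphologyEx(cv2.morphologyEx(binary, cv2.MORPH_OPEN, k),
                                  cv2.MORPH_CLOSE, k)
    close_open = cv2.morphologyEx(cv2.morphologyEx(binary, cv2.MORPH_CLOSE, k),
                                  cv2.MORPH_OPEN, k)
    smoothed = ((open_close.astype(np.uint16) + close_open) // 2).astype(np.uint8)
    candidates = cv2.absdiff(binary, smoothed)

    # Step 3: filter connected components by shape, approximating the
    # area/boundary-pixel and minor/major-axis ratios from the abstract
    # with bounding-box proxies; the bounds are illustrative.
    n, labels, stats, _ = cv2.connectedComponentsWithStats(candidates)
    mask = np.zeros_like(candidates)
    for i in range(1, n):
        x, y, w, h, area = stats[i]
        fill_ratio = area / float(2 * (w + h))     # area vs. perimeter proxy
        axis_ratio = min(w, h) / float(max(w, h))  # minor/major axis proxy
        if fill_ratio > 0.5 and axis_ratio > 0.1:
            mask[labels == i] = 255
    return mask
```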

Effects of Green Manure Crops of Legume and Gramineae on Growth Responses and Yields in Rice Cultivation with Respect to Environment Friendly Agriculture (친환경농업기술 개발을 위한 벼 재배 시 벼의 생육 및 수량에 대한 두과와 화본과 녹비작물의 효과)

  • Song, Beom-Heon;Lee, Kyung-A;Jeon, Weon-Tai;Kim, Min-Tae;Cho, Hyun-Suk;Oh, In-Seok;Kim, Chung-Guk;Kang, Ui-Gum
    • KOREAN JOURNAL OF CROP SCIENCE
    • /
    • v.55 no.2
    • /
    • pp.144-150
    • /
    • 2010
  • Environmentally friendly agricultural techniques that use green manure crops have recently been in greater demand, both to obtain safe agricultural products and to reduce the use of fertilizers and agricultural chemicals. The utilization of green manure crops, which is closely related to the cropping system, is therefore very important. The purposes of this study were to investigate the effects of two green manure crops, hairy vetch (a legume) and rye (a gramineous crop), and to compare their effects in rice cultivation. The hairy vetch and rye were incorporated into the paddy soil as green manure 10 days before rice transplanting. Plant height increased gradually from the maximum tillering stage to the heading stage; it was highest under conventional cultivation in 2007 and highest under hairy vetch in 2008. The number of tillers was higher under the hairy vetch and hairy vetch+rye treatments than under conventional cultivation. Dry weight was also higher with hairy vetch than with conventional cultivation, while it was clearly lower with rye than with either hairy vetch or conventional cultivation. Owing to the relatively high tiller number and spikelet number per panicle among the yield components, the yield of rough rice increased by about 6% and 8% in 2007 and 2008, respectively, compared with the yield under conventional cultivation. Based on these results, hairy vetch would be a good green manure crop for rice cultivation.

On the Utilization of Inactive BHC isomers -Synthesis of 3-(2,4,5-trichlorophenyl)-1-methyl urea as a herbicide- (BHC 이성질체(異性質體)의 활용(活用)에 관(關)한 연구(硏究) -제초제(除草劑)로서 3-(2,4,5-trichlorophenyl)-1- methyl urea의 합성(合成)-)

  • Lee, Kyu-Seung;Park, Chang-Kyu
    • Applied Biological Chemistry
    • /
    • v.22 no.2
    • /
    • pp.109-122
    • /
    • 1979
  • The present study was carried out to reduce the residual toxicity of BHC insecticides inherent in organochlorine pesticides. To this end, the γ-isomer, the most potent insecticidal component among the BHC stereoisomers, was isolated and fortified by means of solvent precipitation. In parallel, 3-(2,4,5-trichlorophenyl)-1-methyl urea was prepared in good yield from technical BHC via 1,2,4-trichlorobenzene, 2,4,5-trichloronitrobenzene, and 2,4,5-trichloroaniline. In addition, certain merits of the compound that make it usable as a herbicide are discussed. The results are summarized as follows: 1. By recrystallizing technical BHC from a methanol-water binary solvent system, the γ-isomer was enriched to 49.7% at 95% recovery. 2. By partitioning technical BHC in 85% methanolic solution into chloroform, the γ-isomer was fortified to 89.6% at 90.5% recovery. 3. The yield of 1,2,4-trichlorobenzene from technical BHC depended greatly on the concentration of alkali and, to a lesser degree, on the kind of alkali. 4. Surfactants, in particular a cationic quaternary ammonium salt, increased the yield of 1,2,4-trichlorobenzene from technical BHC by alkaline hydrolysis. 5. Conversion of 1,2,4-trichlorobenzene to 2,4,5-trichloronitrobenzene was effected almost quantitatively using an HNO₃-H₂SO₄ nitrating agent at low temperature. 6. A yield of 91.4% was observed for the synthesis of 2,4,5-trichloroaniline by reducing 2,4,5-trichloronitrobenzene in the presence of iron turnings and hydrochloric acid. 7. The overall yield of 3-(2,4,5-trichlorophenyl)-1-methyl urea based on BHC was 60.8%. 8. The inhibitory effects of 3-(2,4,5-trichlorophenyl)-1-methyl urea on both germination and growth of several crops were found to be comparable to or more potent than those of linuron® and diuron®. It was also noted that susceptibility to the prepared compound depended on the crop as well as on the specific part (shoots, roots) of the plant exposed to the chemicals.

The Statistical Approach-based Intelligent Education Support System (통계적 접근법을 기초로 하는 지능형 교육 지원 시스템)

  • Chung, Jun-Hee
    • Journal of Intelligence and Information Systems
    • /
    • v.18 no.1
    • /
    • pp.109-123
    • /
    • 2012
  • Many kinds of education systems are available to students. Content of many kinds, such as school subjects, licensing, and job training, is delivered through many kinds of media, such as text, images, and video. Students apply the knowledge they have learned and use it when they learn other things. In existing education systems, the system is often not genuinely helpful to students, either because the content delivered is too hard for them or because it is too easy and they end up re-learning what they already know. To address this, this paper suggests a method that delivers the most appropriate lecture content to each student. Because difficulty is relative, content A can be easier than content B for one group of students while content B is easier than content A for another group; it is therefore not easy to measure the difficulty of lecture content. The method suggested here takes this into account when selecting content. The whole body of lecture content is divided into many lecture modules. Before studying, students solve pattern recognition questions, a kind of prior test, and the system selects and provides the most appropriate lecture module according to their scores. When the system selects a lecture module for a student, both the student's answers and the difficulty of the modules are considered. In existing education systems, a single version of the content is delivered to a wide variety of students; delivering the same lecture content to everyone is inefficient. The proposed system instead selects appropriate content using each student's pattern recognition answers. A pattern recognition question is a prior-test question developed on the basis of a lecture module and used to determine whether a student already knows that module's content. Because the difficulty of a lecture module reflects all students' answer scores, the difficulty is updated whenever a student submits an answer. The suggested system measures the students' relative knowledge from their answers and assigns the difficulty accordingly. The suggested method applies only when the order of the lecture content does not matter to the progress of the lecture; if the content of unit 1 must be studied before the content of unit 2, the method does not apply. The method is therefore introduced here on the basis of English grammar, a subject in which ordering is not important. If the suggested method is applied appropriately to the education environment, students who lack basic knowledge will learn the basic content well and build the foundation needed for harder content, while students who already know the content will not study it again and will save time to learn more varied content. These and other improvements can benefit the education environment, and applying the method, introduced here for English grammar, to various education systems such as primary education, secondary education, and job education would provide further benefits. This paper also suggests a direction for realizing these ideas. The suggested method is implemented with a MySQL database and Java/JSP programs. We hope the method will be researched further and contribute to the development of education in Korea.
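
A minimal sketch of the selection logic the abstract describes: module difficulty maintained as a running average over all students' pattern-recognition answers, and the module chosen by matching difficulty to the student's prior-test performance. The class, the update rule, and the mastery threshold are illustrative assumptions, not the paper's specification.

```python
from dataclasses import dataclass

@dataclass
class LectureModule:
    name: str
    difficulty: float = 0.5   # running estimate in [0, 1]; 1 = hardest
    n_answers: int = 0

    def record_answer(self, correct: bool) -> None:
        # Difficulty reflects all students' answers so far: each wrong
        # answer nudges the estimate up, each correct answer nudges it down.
        self.n_answers += 1
        observed = 0.0 if correct else 1.0
        self.difficulty += (observed - self.difficulty) / self.n_answers

def select_module(modules, prior_test_scores):
    """Pick the unmastered module whose difficulty best fits the student.

    prior_test_scores maps module name -> fraction of that module's
    pattern recognition questions the student answered correctly.
    """
    MASTERY = 0.8  # illustrative threshold for "already knows this module"
    candidates = [m for m in modules
                  if prior_test_scores.get(m.name, 0.0) < MASTERY]
    if not candidates:
        return None  # the student already knows every module
    # Student ability proxy: average prior-test score across modules.
    ability = sum(prior_test_scores.values()) / max(len(prior_test_scores), 1)
    # Choose the module whose difficulty is closest to what the student can
    # handle, so the content is neither too hard nor too easy.
    return min(candidates, key=lambda m: abs(m.difficulty - (1.0 - ability)))

modules = [LectureModule("articles"), LectureModule("subjunctive", 0.7)]
print(select_module(modules, {"articles": 0.9, "subjunctive": 0.3}).name)
```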

Object Tracking Based on Exactly Reweighted Online Total-Error-Rate Minimization (정확히 재가중되는 온라인 전체 에러율 최소화 기반의 객체 추적)

  • JANG, Se-In;PARK, Choong-Shik
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.4
    • /
    • pp.53-65
    • /
    • 2019
  • Object tracking is one of the important steps in building video-based surveillance systems, and is considered an essential task alongside object detection and recognition. Various machine learning methods (e.g., least squares, the perceptron, and support vector machines) can be applied in different designs of tracking systems. Generative methods (e.g., principal component analysis) have generally been utilized for their simplicity and effectiveness, but they focus only on modeling the target object. Because of this limitation, discriminative methods (e.g., binary classification) were adopted to distinguish the target object from the background. Among the machine learning methods for binary classification, total-error-rate minimization is one of the more successful. Total-error-rate minimization can achieve a global minimum thanks to a quadratic approximation to the step function, whereas other methods (e.g., support vector machines) seek local minima using nonlinear functions (e.g., the hinge loss). Owing to this quadratic approximation, total-error-rate minimization has appropriate properties for solving optimization problems in binary classification. However, it was originally formulated in a batch-mode setting, which restricts it to offline learning in several applications; with limited computing resources, offline learning cannot handle large-scale data sets. Compared with offline learning, online learning can update its solution without storing all training samples during the learning process, and with the growth of large-scale data sets it has become essential for many applications. Since object tracking must handle data samples in real time, online learning based total-error-rate minimization methods are necessary to address object tracking problems efficiently. An online learning based total-error-rate minimization method was previously developed, but it relied on an approximately reweighted technique. Although this online version achieved good performance in biometric applications, it assumes that total-error-rate minimization is achieved only asymptotically, as the number of training samples approaches infinity. Because of the approximation, learning errors can continuously accumulate as training samples arrive, so the approximated online solution can drift to a wrong solution, which can cause significant errors when applied to surveillance systems. In this paper, we propose an exactly reweighted technique that recursively updates the solution of total-error-rate minimization in an online manner. In contrast to approximately reweighted online total-error-rate minimization, exact reweighting is achieved. The proposed exact online learning method based on total-error-rate minimization is then applied to object tracking problems. In our object tracking system, particle filtering is adopted, and the observation model combines generative and discriminative methods to leverage the advantages of both. In our experiments, our proposed object tracking system achieves promising performance on 8 public video sequences compared with competing object tracking systems, and the paired t-test is reported to evaluate the quality of the results. Our proposed online learning method can be extended to deep learning architectures covering both shallow and deep networks, and other online learning methods that need an exact reweighting process can use our proposed reweighting technique. Beyond object tracking, the proposed online learning method can easily be applied to object detection and recognition. Our proposed methods can therefore contribute to the online learning community as well as to the object tracking, detection, and recognition communities.
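
A minimal sketch of exactly reweighted online total-error-rate minimization for a linear scorer, under stated assumptions: the quadratic approximation is modeled as class-normalized least squares with targets tau ± eta, and "exact reweighting" is realized by recomputing the per-class normalizers 1/n+ and 1/n- from running sums after every sample. This is not the authors' published algorithm; in particular, a practical version would update the solution recursively (e.g., via rank-one updates) rather than re-solving the normal equations at each step as done here for clarity.

```python
import numpy as np

class OnlineTER:
    """Minimizes a class-normalized quadratic surrogate of FPR + FNR:
        (1/n+) * sum_pos (w.x - (tau + eta))^2
      + (1/n-) * sum_neg (w.x - (tau - eta))^2  + lam * ||w||^2
    tau (decision threshold), eta (target offset), lam are illustrative."""

    def __init__(self, dim, tau=0.0, eta=1.0, lam=1e-3):
        self.tau, self.eta, self.lam = tau, eta, lam
        self.S = {+1: np.zeros((dim, dim)), -1: np.zeros((dim, dim))}
        self.m = {+1: np.zeros(dim), -1: np.zeros(dim)}
        self.n = {+1: 0, -1: 0}
        self.w = np.zeros(dim)

    def update(self, x, y):
        x = np.asarray(x, dtype=float)
        self.S[y] += np.outer(x, x)   # per-class scatter, running sum
        self.m[y] += x                # per-class sample sum
        self.n[y] += 1
        # Exact reweighting: rebuild the normal equations with the current
        # per-class normalizers 1/n+ and 1/n-, then re-solve.
        A = self.lam * np.eye(x.size)
        b = np.zeros(x.size)
        for cls, target in ((+1, self.tau + self.eta),
                            (-1, self.tau - self.eta)):
            if self.n[cls]:
                A += self.S[cls] / self.n[cls]
                b += target * self.m[cls] / self.n[cls]
        self.w = np.linalg.solve(A, b)

    def predict(self, x):
        return 1 if self.w @ np.asarray(x, dtype=float) > self.tau else -1

# Toy usage: two Gaussian blobs, streamed one sample at a time.
rng = np.random.default_rng(0)
clf = OnlineTER(dim=2)
for _ in range(200):
    y = 1 if rng.random() < 0.5 else -1
    clf.update(rng.normal(loc=y * 1.0, scale=1.0, size=2), y)
print(clf.predict([1.5, 1.5]), clf.predict([-1.5, -1.5]))  # -> 1 -1
```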

Modern Paper Quality Control

  • Olavi Komppa
    • Proceedings of the Korea Technical Association of the Pulp and Paper Industry Conference
    • /
    • 2000.06a
    • /
    • pp.16-23
    • /
    • 2000
  • The increasing functional demands on top-quality printing papers and packaging paperboards, and especially the rapid developments in electronic printing processes and various computer printers during the past few years, set new targets and requirements for modern paper quality. Most of these paper grades today have a relatively high filler content, are moderately or heavily calendered, and have multiple coating layers for the best appearance and performance. In practice, this means that many of the traditional quality assurance methods, mostly designed to measure papers made of pure, native pulp only, cannot reliably (or at all) be used to analyze or rank the quality of modern papers. Hence, the introduction of new measurement techniques is necessary to assure and further develop paper quality today and in the future. Paper formation, i.e., the small-scale (millimeter-scale) variation of basis weight, is the most important quality parameter in papermaking because of its influence on practically all the other quality properties of paper. The ideal paper would be completely uniform, so that the basis weight of each small point (area) measured would be the same. In practice, of course, this is not possible because relatively large local variations always exist in paper. These small-scale basis weight variations, however, are the major cause of many other quality problems, including calender blackening, uneven coating results, uneven printing results, etc. Traditional visual inspection or optical measurement of the paper does not give a reliable understanding of the material variations in the paper, because in the modern papermaking process the optical behavior of paper is strongly affected by the use of, e.g., fillers, dyes, or coating colors. Furthermore, the opacity (optical density) of the paper changes at different process stages such as wet pressing and calendering. The greatest advantage of using the beta transmission method to measure paper formation is that it can be very reliably calibrated to measure the true basis weight variation of all kinds of paper and board, independent of sample basis weight or paper grade. This makes it possible to measure, compare, and judge papers made of different raw materials or different colors, and even to measure heavily calendered, coated, or printed papers. Scientific research in paper physics has shown that the orientation of the fibers in the top layer (paper surface) of the sheet plays the key role in paper curling and cockling, causing the typical practical problems (paper jams) with modern fax and copy machines, electronic printing, etc. On the other hand, the fiber orientation in the surface and middle layers of the sheet controls the bending stiffness of paperboard. Therefore, a reliable measurement of paper surface fiber orientation gives a powerful tool to investigate and predict paper curling and cockling tendency, and provides the information necessary to fine-tune the manufacturing process for optimum quality. Many papers, especially heavily calendered and coated grades, strongly resist liquid and gas penetration, being beyond the measurement range of traditional instruments or requiring an inconveniently long measuring time per sample. The increased surface hardness and the use of filler minerals and mechanical pulp make a reliable, non-leaking sample contact with the measurement head a challenge of its own. Paper surface coating creates, as expected, a layer whose permeability characteristics differ completely from those of the other layers of the sheet. The latest developments in sensor technologies have made it possible to reliably measure gas flow under well-controlled conditions, allowing investigation of the gas penetration of open structures, such as cigarette paper, tissue, or sack paper, and, in the low-permeability range, analysis of even fully greaseproof papers, silicone papers, and heavily coated papers and boards, and even the detection of defects in barrier coatings. Even nitrogen or helium may be used as the gas, giving completely new possibilities to rank products or to find correlations with critical process or converting parameters. All modern paper machines include many on-line measuring instruments that provide the necessary information for automatic process control systems. Hence, the reliability of the information obtained from these different sensors is vital for good optimization and process stability. If any of these on-line sensors does not operate exactly as planned (having even a small measurement error or malfunction), the process control will set the machine to operate away from the optimum, resulting in loss of profit or eventual problems in quality or runnability. To assure optimum operation of paper machines, a novel quality assurance policy for the on-line measurements has been developed, including control procedures utilizing traceable, accredited standards for the best reliability and performance.
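
The formation measurement discussed above amounts to quantifying small-scale basis-weight variation. A minimal sketch under stated assumptions: the gauge output is modeled as a 2D basis-weight array, and the coefficient of variation of patch means serves as an illustrative, not standardized, formation index.

```python
import numpy as np

def formation_index(basis_weight_map, patch=8):
    """Illustrative formation index: coefficient of variation (%) of the
    mean basis weight over small patches of a 2D basis-weight map, as a
    beta-transmission gauge might record it. `patch` is the averaging
    window in pixels (assumed to be roughly millimeter scale)."""
    h, w = basis_weight_map.shape
    h, w = h - h % patch, w - w % patch              # crop to whole patches
    patches = basis_weight_map[:h, :w].reshape(h // patch, patch,
                                               w // patch, patch)
    means = patches.mean(axis=(1, 3))                # per-patch basis weight
    return 100.0 * means.std() / means.mean()        # CV in percent

# A perfectly uniform "ideal" sheet vs. a sheet with local variation:
rng = np.random.default_rng(0)
ideal = np.full((256, 256), 80.0)                    # 80 g/m^2 everywhere
real = 80.0 + 4.0 * rng.standard_normal((256, 256))
print(formation_index(ideal), formation_index(real))
```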

Result of Cox Maze Procedure with Bipolar Radiofrequency Electrode and Cryoablator for Persistent Atrial Fibrillation - Compared with Cut-sew Technique - (양극고주파전극과 냉동프로브를 이용한 지속성 심방세동의 수술 결과 - 절개/봉합술식과 비교 -)

  • Lee, Mi-Kyung;Choi, Jong-Bum;Lee, Jung-Moon;Kim, Kyung-Hwa;Kim, Min-Ho
    • Journal of Chest Surgery
    • /
    • v.42 no.6
    • /
    • pp.710-718
    • /
    • 2009
  • Background: The Cox maze procedure has been used as a standard surgical treatment for atrial fibrillation for about 20 years. Recently, its creators have used a bipolar radiofrequency electrode (the Cox maze IV procedure) instead of the incision-and-suture (cut-sew) technique to make atrial ablation lesions for persistent atrial fibrillation. We investigated clinical outcomes of the Cox maze procedure with a bipolar radiofrequency electrode and cryoablator in patients with persistent atrial fibrillation, and compared the results with clinical outcomes of the cut-sew procedure. Material and Method: Between April 2005 and July 2007, 40 patients with persistent atrial fibrillation underwent the Cox maze IV procedure with a bipolar radiofrequency electrode and cryoablator (bipolar radiofrequency group). Surgical outcomes were compared with those of 35 patients who had the cut-sew technique of the Cox maze III procedure. All patients had concomitant cardiac surgery. Postoperatively, the patients were followed up every 1 to 2 months. Result: At 6 months postoperatively, the conversion rate to regular sinus rhythm was not significantly different between the two groups: 95.0% for the bipolar radiofrequency ablation group and 97.1% for the cut-sew group (p=1.0). At the end of the follow-up period, the conversion rate to regular sinus rhythm was also not significantly different (92.5% vs. 91.6%, p=1.0). In multivariate analysis using a Cox regression model, the postoperative left atrial dimension was an independent determinant of sinus conversion in the bipolar radiofrequency ablation group (hazard ratio 31, p=0.005). In the Cox regression model for both groups, atrial fibrillation at 6 months postoperatively (hazard ratio 92.24, p=0.003) and the postoperative left atrial dimension (hazard ratio 16.05, p=0.019) were independent risk factors for continuance or recurrence of atrial fibrillation after the Cox maze procedure. Aortic cross-clamp time and cardiopulmonary bypass time were significantly shorter in the radiofrequency group than in the cut-sew group. Conclusion: In the Cox maze procedure for patients with persistent atrial fibrillation, bipolar radiofrequency ablation with a cryoablator is as good as the cut-sew technique for conversion to sinus rhythm. The postoperative left atrial dimension is an independent determinant of postoperative continuance and recurrence of atrial fibrillation.
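
A minimal sketch of the kind of multivariate Cox regression reported above, assuming the lifelines Python library; the data frame, column names, and values are hypothetical illustrations, not the study's data, and a real analysis would require a far larger sample for stable hazard-ratio estimates.

```python
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical follow-up data: one row per patient, with time to AF
# recurrence (months), an event flag, and candidate predictors.
df = pd.DataFrame({
    "months_followed": [6, 14, 22, 9, 30, 18, 12, 26, 8, 20],
    "af_recurred":     [1, 0, 0, 1, 0, 1, 0, 0, 1, 0],
    "af_at_6_months":  [1, 0, 0, 1, 0, 0, 1, 0, 1, 0],
    "postop_la_dim":   [52, 41, 38, 55, 40, 44, 49, 39, 54, 42],  # mm
})

cph = CoxPHFitter()
cph.fit(df, duration_col="months_followed", event_col="af_recurred")
cph.print_summary()  # hazard ratios (exp(coef)) and p-values per covariate
```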

The Results of Postoperative Radiotherapy for Hypopharyngeal Carcinoma (하인두암 환자에서의 수술 후 방사선치료의 결과)

  • Kim Won Taek;Ki Yong Kan;Nam Ji Ho;Kim Dong Won;Lee Byung Ju;Wang Su Gun;Kyuon Byung Hyun
    • Radiation Oncology Journal
    • /
    • v.22 no.4
    • /
    • pp.254-264
    • /
    • 2004
  • Purpose: This study was carried out to confirm the clinical value and limitations of postoperative radiotherapy for hypopharyngeal carcinoma, to evaluate the various prognostic factors that may affect treatment results, and to use these results as fundamental data for designing a new treatment strategy. Methods and Materials: A retrospective analysis was performed on 64 previously untreated patients with squamous cell carcinoma of the hypopharynx, seen between 1988 and 1999 at Pusan National University Hospital. Most patients were treated by laryngopharyngectomy and neck dissection followed by conventionally fractionated postoperative radiotherapy to the surgical bed and cervical nodal areas. Results: The five-year overall survival rate and cause-specific survival rate were 42.2% and 51.6%, respectively. Univariate analysis of various clinical and pathologic factors confirmed overall stage, TN stage, second primary cancers, positive surgical margins, nodal extracapsular extension, and total radiation dose as significant prognostic factors for hypopharyngeal carcinoma. In multivariate analysis, however, only TN stage, positive surgical margins, and extracapsular extension were statistically significant. Conclusion: In resectable cases of hypopharyngeal carcinoma, combined surgery and postoperative radiotherapy obtained good treatment results, even though the function of the larynx and pharynx was sacrificed. In advanced and unresectable cases, however, with respect to survival and quality of life, we were able to confirm some limitations of combined therapy. We therefore recommend comparative studies of recent chemoradiotherapy methods and advanced radiotherapy techniques against these data.

The Effect of the Optical Points Difference between Finished-Reading Glasses and Dispensing Reading Glasses (완성품 돋보기와 조제가공된 돋보기가 광학적 요소에 미치는 영향)

  • Shim, Young-Cheol;Yoo, Gun-Chang;Kim, In-Suk
    • Journal of Korean Ophthalmic Optics Society
    • /
    • v.13 no.3
    • /
    • pp.65-71
    • /
    • 2008
  • Purpose: This paper studied the effect on the eyes of discrepancies between the optical centration of dispensing reading glasses made by an optician and that of finished (ready-made) reading glasses currently on the market. Methods: Measurements were made across eleven power categories from +1.00D to +4.00D. The glasses were also divided into three groups by frame size, and the optical center point (O.C.) and optical center height (O.H.) were measured for 200 men and women over 40 years of age without ocular disease living in Gwangsan-gu, Gwangju. Results: The optical center point ranged from 57 mm to 80 mm, with the most common range being 61 mm to 65 mm (64.6%). The optical center height ranged from 1 mm to 8 mm, with 4 mm being the most common value (23%). In other words, finished reading glasses have irregular optical centration. Of the 200 subjects, 75.5% (151 people) currently use finished reading glasses. Among these 151 people, the most common error between the O.C. of the finished reading glasses and the wearer's P.D. was 4 mm (45%), and the most common error between the O.H. of the finished reading glasses and the wearer's O.H. ranged from 3 mm to 4 mm. Notably, all 151 wearers of finished reading glasses reported eye fatigue when wearing them; 53 people (35%) reported fatigue within 10 to 20 minutes of wear. Based on this research, we conducted a further experiment to find the prismatic effect of the optical-center error, since this indicates whether finished reading glasses are good enough to wear: we multiplied the lens power (diopters) by the distance between the O.C. of the finished reading glasses and the wearer's P.D. Consequently, we found that the finished reading glasses violate the German RAL-RG 915 standard, and that the induced prism grows with the diopter of the lenses. In conclusion, this research shows that wearing finished reading glasses carries a risk for vision. Opticians should therefore recommend correctly dispensed reading glasses made to the wearer's optical center point.
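
The prism calculation in this abstract is Prentice's rule: induced prism in prism diopters equals the decentration in centimeters times the lens power in diopters. A minimal sketch of that calculation; the power values mirror the abstract's range, while any pass/fail tolerance is deliberately omitted because the exact RAL-RG 915 limits are not given here.

```python
def induced_prism(power_d: float, decentration_mm: float) -> float:
    """Prentice's rule: prism (prism diopters) = c (cm) * F (D)."""
    return abs(power_d) * (decentration_mm / 10.0)

# Example using the abstract's most common case: a 4 mm error between
# the lens optical center and the wearer's P.D. (a per-eye breakdown
# would be needed for a formal tolerance check).
for power in (1.0, 2.0, 3.0, 4.0):
    prism = induced_prism(power, 4.0)
    print(f"+{power:.2f}D with 4 mm decentration -> {prism:.2f} prism diopters")
```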

Dynamics of Technology Adoption in Markets Exhibiting Network Effects

  • Hur, Won-Chang
    • Asia pacific journal of information systems
    • /
    • v.20 no.1
    • /
    • pp.127-140
    • /
    • 2010
  • The benefit that a consumer derives from the use of a good often depends on the number of other consumers purchasing the same good or other compatible items. This property, known as a network externality, is significant in many IT-related industries. Over the past few decades, network externalities have been recognized in the context of physical networks such as the telephone and railroad industries. Today, as many products are provided as systems consisting of compatible components, the appreciation of network externalities is becoming increasingly important. Network externalities have been extensively studied by economists seeking to explain new phenomena resulting from rapid advances in ICT (Information and Communication Technology). As a result of these efforts, a new body of theories for the 'New Economy' has been proposed. The theoretical bottom line of such theories is that technologies subject to network effects exhibit multiple equilibria and will finally lock into a monopoly, with one standard cornering the entire market. They emphasize that such "tippiness" is a typical characteristic of networked markets: multiple incompatible technologies rarely coexist, and the switch to a single leading standard occurs suddenly. Moreover, it is argued that this standardization process is path dependent and its ultimate outcome unpredictable. With incomplete information about other actors' preferences, there can be excess inertia: consumers who only moderately favor the change are themselves insufficiently motivated to start the bandwagon rolling, but would get on it once it did start to roll. This startup problem can prevent the adoption of any standard at all, even one preferred by everyone. Conversely, excess momentum is another possible outcome, for example if a sponsoring firm uses low prices during the early periods of diffusion. The aim of this paper is to analyze the dynamics of the adoption process in markets exhibiting network effects by focusing on two factors: switching and agent heterogeneity. Switching is an important factor in analyzing the adoption process: one agent's switch invokes switching by other adopters, bringing about a positive feedback process that can significantly complicate adoption. Agent heterogeneity also plays an important role in shaping the early development of the adoption process, which in turn has a significant impact on its later development. The effects of these two factors are analyzed by developing an agent-based simulation model. Agent-based modeling (ABM) is a computer-based simulation methodology that can offer many advantages over traditional analytical approaches. The model is designed so that agents have diverse preferences regarding technology and are allowed to switch their previous choice. The simulation results show that adoption processes in a market exhibiting network effects are significantly affected by the distribution of agents and the occurrence of switching. In particular, both weak heterogeneity and strong network effects cause agents to start switching early, which expedites the emergence of lock-in. When network effects are strong, agents are easily affected by changes in early market shares; this causes agents to switch earlier and in turn speeds up the market's tipping. The same effect is found for highly homogeneous agents. When agents are highly homogeneous, the market starts to tip toward one technology rapidly, and its choice is not always consistent with the population's initial inclination. Increased volatility and faster lock-in increase the possibility that the market will reach an unexpected outcome. The primary contribution of this study is the elucidation of the role of the parameters characterizing the market in the development of the lock-in process, and the identification of the conditions under which such unexpected outcomes happen.
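
A minimal agent-based sketch of the dynamics the abstract describes, under stated assumptions: a linear utility mixing a fixed idiosyncratic preference with a network-effect term proportional to current market share, normally distributed preference heterogeneity, and costless switching at every step. The utility form and all parameter values are illustrative, not the paper's model.

```python
import numpy as np

def simulate(n_agents=1000, steps=50, network_strength=0.8,
             heterogeneity=1.0, seed=0):
    """Agents repeatedly pick technology A or B; utility mixes a fixed
    idiosyncratic preference with the current market share of A (the
    network effect). Returns the share of A over time; agents may
    switch freely at every step."""
    rng = np.random.default_rng(seed)
    # Idiosyncratic preference for A over B; its spread is the agent
    # heterogeneity studied in the paper.
    pref = rng.normal(0.0, heterogeneity, n_agents)
    choice = rng.integers(0, 2, n_agents)  # 0 = B, 1 = A, random start
    shares = []
    for _ in range(steps):
        share_a = choice.mean()
        # Net utility of A minus B: preference plus a network pull toward
        # the majority (positive when A leads, negative when B leads).
        net = pref + network_strength * (share_a - 0.5) * 2
        choice = (net > 0).astype(int)
        shares.append(share_a)
    return shares

# Strong network effects + homogeneous agents: rapid tipping (lock-in).
print(simulate(network_strength=2.0, heterogeneity=0.2)[-1])
# Weak network effects + heterogeneous agents: the technologies coexist.
print(simulate(network_strength=0.2, heterogeneity=1.0)[-1])
```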