• Title/Summary/Keyword: Store Size

Search Results: 338

Neighbor Caching for P2P Applications in Multi-hop Wireless Ad Hoc Networks (멀티 홉 무선 애드혹 네트워크에서 P2P 응용을 위한 이웃 캐싱)

  • 조준호;오승택;김재명;이형호;이준원
    • Journal of KIISE: Information Networking / v.30 no.5 / pp.631-640 / 2003
  • Because of multi-hop wireless communication, P2P applications in ad hoc networks suffer from poor performance. We propose a neighbor caching strategy to overcome this shortcoming and show that it is more efficient than self caching, in which each node stores data only in its own cache. With neighbor caching, a node can instantaneously extend its caching storage by borrowing storage from idle neighbors, and thus avoid multi-hop wireless communication with a data source far away from itself. We also present a ranking-based prediction that selects the most appropriate neighbor in which to store data. A node that uses the ranking-based prediction can select the neighbor most likely to keep the data for a long time and can avoid caching low-ranked data; the ranking-based prediction therefore improves the throughput of neighbor caching. In the simulation results, we observe that the benefit of neighbor caching grows as the network size grows, as the idle time grows, and as the cache size shrinks. We also show that the ranking-based prediction is an adaptive algorithm that adjusts how often data are moved to neighbors, making neighbor caching flexible according to the idleness of nodes.
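
The abstract above describes the ranking-based prediction only in prose. Below is a minimal sketch of how such a ranking-based neighbor selection could look; the Neighbor fields, the rank formula, and the threshold check are illustrative assumptions, not the paper's algorithm.

```python
# Illustrative sketch (not from the paper): ranking-based selection of an
# idle neighbor to borrow cache space from.  All names and the ranking
# formula are assumptions for illustration only.
from dataclasses import dataclass

@dataclass
class Neighbor:
    node_id: str
    idle_time: float    # how long the node has been idle (s)
    free_cache: int     # free cache slots the node could lend

def rank(neighbor: Neighbor) -> float:
    """Higher rank ~ more likely to keep borrowed data for a long time."""
    return neighbor.idle_time * neighbor.free_cache

def pick_cache_target(neighbors: list[Neighbor], data_rank: float,
                      threshold: float) -> Neighbor | None:
    """Return the best neighbor to hold the data, or None if the data is
    ranked too low to be worth moving at all."""
    if data_rank < threshold or not neighbors:
        return None
    best = max(neighbors, key=rank)
    return best if rank(best) > 0 else None

# Example: a node with a full local cache considers two idle neighbors.
neighbors = [Neighbor("n1", idle_time=30.0, free_cache=4),
             Neighbor("n2", idle_time=5.0, free_cache=8)]
print(pick_cache_target(neighbors, data_rank=0.7, threshold=0.5))  # -> n1
```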

The Effects of Complex Commercial Facility on the Prices of Nearby Apartments (복합상업시설이 인근 아파트 가격에 미치는 영향)

  • Kim, Yen-Uk;Chun, Hae-Jung
    • Journal of Digital Convergence / v.20 no.3 / pp.231-240 / 2022
  • This study empirically analyzed the effect of a complex commercial facility on the prices of nearby apartments using a hedonic price model. The spatial range of the study was the walking area of the H Department Store located in Pangyo, one of the second-generation new towns in the Seoul suburbs, and the time range was 2020. The dependent variable was the real transaction price of the apartment, and the independent variables were the characteristics of the housing, the characteristics of the complex, and the characteristics of the region. As a result of the analysis, the exclusive-use area, the transaction floor, and highway accessibility had a positive effect on apartment prices, while the elapsed year had a negative effect. The size of the apartment complex, however, had little effect on apartment prices, whereas the distance from the complex commercial facility was related to apartment prices: prices declined with distance from the facility, and this effect was much larger than the effect of distance from subway stations. This confirms that the factors affecting apartment prices, and the size of their influence, differ between the new town area and the existing metropolitan area.
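
As a rough illustration of the hedonic price model used above, the sketch below fits an ordinary least-squares regression of price on a few housing characteristics using synthetic data; the variable names and coefficients are assumptions, and the signs merely mirror those reported in the abstract.

```python
# Illustrative hedonic price regression on synthetic data (not the paper's
# dataset): price = b0 + b1*area + b2*floor + b3*age + b4*dist_to_mall + e
import numpy as np

rng = np.random.default_rng(0)
n = 200
area = rng.uniform(60, 140, n)        # exclusive-use area (m^2)
floor = rng.integers(1, 30, n)        # transaction floor
age = rng.uniform(0, 20, n)           # elapsed years
dist_mall = rng.uniform(0.1, 2.0, n)  # distance to commercial facility (km)

# Synthetic "true" relationship mirroring the signs reported in the abstract.
price = 300 + 3.0 * area + 1.5 * floor - 4.0 * age - 25.0 * dist_mall \
        + rng.normal(0, 10, n)

X = np.column_stack([np.ones(n), area, floor, age, dist_mall])
coef, *_ = np.linalg.lstsq(X, price, rcond=None)
print(dict(zip(["const", "area", "floor", "age", "dist_mall"],
               coef.round(2))))
```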

Optimized Hardware Design using Sobel and Median Filters for Lane Detection

  • Lee, Chang-Yong;Kim, Young-Hyung;Lee, Yong-Hwan
    • Journal of Advanced Information Technology and Convergence / v.9 no.1 / pp.115-125 / 2019
  • In this paper, an image is received from a camera and the lane is detected. There are various ways to detect lanes; for edge detection, the Sobel edge detector and the Canny edge detector are widely used. The hardware is designed to use a minimum of multiplication and division. The images are tested using black-box footage recorded from a vehicle. Because the top of the black-box image is mostly background, it is excluded from the calculation. In addition, to speed up processing, YCbCr is computed from the image and only the data for the desired lane colors, white and yellow, is used to detect the lane. A median filter is used to remove noise from the images. Median filters are excellent at removing noise, but they generally take a long time because all values must be compared; in this paper, the result of the median filter is obtained using additions, which shortens the processing time. The Sobel edge detector is faster but more sensitive to noise than the Canny edge detector, and these shortcomings are addressed with complementary algorithms. The data are also organized and processed in parallel processing pipelines. To reduce the memory size, the system does not store all the data of each step in memory but uses four line buffers: three line buffers perform the mask operations, and one line buffer stores new data at the same time as the operation. With this approach, memory can be used with about six times the processing speed and about 33% greater capacity than the other methods presented in this paper. The target operating frequency is 50 MHz, at which the design can process 2157 fps for 640x360 images, 540 fps for HD images, and 240 fps for Full HD images, so it can handle most video at 30 fps as well as at 60 fps. At the maximum operating frequency, even larger frame rates can be processed.
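
The line-buffer scheme described above can be modeled in software. The sketch below is a behavioral model, not the paper's RTL: three line buffers feed a 3x3 Sobel mask while a fourth receives the incoming row, so the whole frame never needs to be stored. The function names and the toy frame are assumptions.

```python
# Illustrative software model (not the paper's hardware) of the 4-line-buffer
# idea: three buffers feed the 3x3 window operation while the fourth receives
# the incoming row, so the full frame never has to be stored.
import numpy as np

def sobel_3x3(window: np.ndarray) -> float:
    """Gradient magnitude of a 3x3 window (|Gx| + |Gy| approximation)."""
    gx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]])
    gy = gx.T
    return abs((window * gx).sum()) + abs((window * gy).sum())

def stream_edges(frame: np.ndarray) -> np.ndarray:
    h, w = frame.shape
    out = np.zeros_like(frame, dtype=float)
    buffers = [frame[0].copy(), frame[1].copy(), frame[2].copy(), None]
    for row in range(3, h + 1):
        # Mask operation over the three filled line buffers.
        window_rows = np.stack(buffers[:3])
        for col in range(1, w - 1):
            out[row - 2, col] = sobel_3x3(window_rows[:, col - 1:col + 2])
        if row < h:
            # The fourth buffer stores the new row while the others are in
            # use, then the buffers rotate.
            buffers[3] = frame[row].copy()
            buffers = buffers[1:] + [None]
    return out

edges = stream_edges(np.random.default_rng(1).integers(0, 256, (8, 8)).astype(float))
print(edges.shape)
```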

The purified extract of steamed Panax ginseng protects cardiomyocyte from ischemic injury via caveolin-1 phosphorylation-mediating calcium influx

  • Hai-Xia Li;Yan Ma;Yu-Xiao Yan;Xin-Ke Zhai;Meng-Yu Xin;Tian Wang;Dong-Cao Xu;Yu-Tong Song;Chun-Dong Song;Cheng-Xue Pan
    • Journal of Ginseng Research / v.47 no.6 / pp.755-765 / 2023
  • Background: Caveolin-1, the scaffolding protein of cholesterol-rich invaginations, plays an important role in store-operated Ca2+ entry (SOCE), and its phosphorylation at Tyr14 (p-caveolin-1) is vital for mobilizing protection against myocardial ischemia (MI) injury. SOCE, comprising STIM1, ORAI1 and TRPC1, contributes to intracellular Ca2+ ([Ca2+]i) accumulation in cardiomyocytes. The purified extract of steamed Panax ginseng (EPG) attenuated [Ca2+]i overload against MI injury. Thus, the aim of this study was to investigate the possibility that EPG affects p-caveolin-1 and thereby mediates SOCE/[Ca2+]i against MI injury in neonatal rat cardiomyocytes and a rat model. Methods: PP2, an inhibitor of p-caveolin-1, was used. Cell viability and [Ca2+]i concentration were analyzed in cardiomyocytes. In rats, myocardial infarct size, pathological damage, apoptosis and cardiac fibrosis were evaluated; p-caveolin-1 and STIM1 were detected by immunofluorescence; and the levels of caveolin-1, STIM1, ORAI1 and TRPC1 were determined by RT-PCR and Western blot. The release of LDH, cTnI and BNP was also measured. Results: EPG, with ginsenosides accounting for 57.96%, suppressed the release of LDH, cTnI and BNP and protected cardiomyocytes by inhibiting Ca2+ influx. EPG also significantly relieved myocardial infarct size, cardiac apoptosis, fibrosis, and ultrastructural abnormality. Moreover, EPG negatively regulated SOCE by increasing p-caveolin-1 protein, decreasing ORAI1 mRNA, and decreasing the protein levels of ORAI1, TRPC1 and STIM1. More importantly, inhibition of p-caveolin-1 significantly suppressed all of the above cardioprotective effects of EPG. Conclusions: Caveolin-1 phosphorylation is involved in the protective effects of EPG against MI injury, with increased p-caveolin-1 negatively regulating SOCE/[Ca2+]i.

A Real-Time Stock Market Prediction Using Knowledge Accumulation (지식 누적을 이용한 실시간 주식시장 예측)

  • Kim, Jin-Hwa;Hong, Kwang-Hun;Min, Jin-Young
    • Journal of Intelligence and Information Systems / v.17 no.4 / pp.109-130 / 2011
  • One of the major problems in data mining is the size of the data, as most data sets these days have huge volume. Streams of data are normally accumulated into data storage or databases; transactions on the internet, mobile devices and ubiquitous environments produce streams of data continuously. Some data sets are simply buried, unused, inside huge data storage because of their size, while others are lost as soon as they are created because they are not saved. How to use such large data sets, and how to use data on a stream efficiently, are challenging questions in data mining. Stream data is a data set that is accumulated into data storage from a data source continuously, and in many cases its size becomes increasingly large over time. Mining information from this massive data takes too many resources, such as storage, money and time. These characteristics of stream data make it difficult and expensive to store all the stream data accumulated over time; on the other hand, if one uses only recent or partial data to mine information or patterns, valuable and useful information can be lost. To avoid these problems, this study suggests a method that efficiently accumulates information or patterns, in the form of rule sets, over time. A rule set is mined from each data set in the stream and accumulated into a master rule set storage, which also serves as a model for real-time decision making. One of the main advantages of this method is that it takes much smaller storage space than the traditional method of saving the whole data set. Another advantage is that the accumulated rule set is used as a prediction model: the rule set is ready to be used for decisions at any time, so prompt response to user requests, and hence real-time decision making, is possible, which is the greatest advantage of this method. Based on the theory of ensemble approaches, a combination of many different models can produce a better-performing prediction model; the consolidated rule set covers the whole data set, whereas the traditional sampling approach covers only part of it. This study uses stock market data, which is a heterogeneous data set whose characteristics vary over time. The indexes in stock market data can fluctuate whenever an event influences the stock market, so the variance of the values of each variable is large compared with that of a homogeneous data set, and prediction with a heterogeneous data set is naturally much more difficult. This study tests two general mining approaches and compares their prediction performance with that of the method suggested here. The first approach induces a rule set from the most recent data set to predict the next data set; the second induces a rule set from all the data accumulated from the beginning every time a new data set has to be predicted. We found that neither of these two is as good as the accumulated rule set method. Furthermore, the study reports experiments with different prediction models: the first builds a prediction model only with the more important rule sets, and the second uses all the rule sets, assigning weights to the rules based on their performance. The second approach shows better performance than the first. The experiments also show that the suggested method can be an efficient approach for mining information and patterns from stream data. This method has the limitation that its application here is bound to stock market data; a more dynamic real-time stream data set is desirable for applying this method. Another open problem is that, as the number of rules increases over time, special rules such as redundant or conflicting rules have to be managed efficiently.
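
The rule-accumulation idea above can be illustrated with a toy sketch. The code below is not the authors' implementation; the rule representation, merging policy, and weighting are simplified assumptions meant only to show how rules mined from stream chunks can be consolidated into a master rule set and used for weighted prediction.

```python
# Illustrative sketch of the accumulate-rules-over-a-stream idea (not the
# authors' code): each chunk yields simple rules, which are merged into a
# master rule set and weighted by how consistently they reappear.
from collections import defaultdict

def mine_rules(chunk):
    """Toy 'rule induction': map each feature value to the majority label."""
    votes = defaultdict(lambda: defaultdict(int))
    for features, label in chunk:
        for f in features:
            votes[f][label] += 1
    return {f: max(labels, key=labels.get) for f, labels in votes.items()}

def merge(master, new_rules, weight_step=1.0):
    """Accumulate new rules into the master rule set."""
    for f, label in new_rules.items():
        prev_label, w = master.get(f, (label, 0.0))
        if prev_label == label:
            master[f] = (label, w + weight_step)       # reinforce agreeing rule
        elif weight_step > w:
            master[f] = (label, weight_step)           # new rule overrides a weaker one
        else:
            master[f] = (prev_label, w - weight_step)  # keep old rule, weakened
    return master

def predict(master, features):
    """Weighted vote of all matching rules in the master rule set."""
    scores = defaultdict(float)
    for f in features:
        if f in master:
            label, w = master[f]
            scores[label] += w
    return max(scores, key=scores.get) if scores else None

master = {}
stream = [[(("up", "high_vol"), "buy"), (("down", "low_vol"), "sell")],
          [(("up", "low_vol"), "buy")]]
for chunk in stream:
    master = merge(master, mine_rules(chunk))
print(predict(master, ("up", "high_vol")))   # -> 'buy'
```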

A comparative study on the distribution transaction policy between Korea and Japan: focused on unfair transaction behavior prohibition (유통부문에 있어서 경쟁정책의 비교 연구 - 불공정거래행위에 대한 한국과 일본의 대응방식 -)

  • Yoo, Ki-Joon
    • Journal of Distribution Research / v.15 no.5 / pp.103-126 / 2010
  • The development of an industry, including the distribution sector, is influenced not only by government policy but also by the behavior of the related firms. Recently, large-scale retailers have come to hold far more channel power than any other distributors, including monopolistic manufacturers. It is now time for the government to prepare policies against unfair transaction behaviors by large-scale retailers. In this paper I examine distribution competition policy from a policy-response point of view related to the transition of the distribution system, comparing the case of Korea with that of Japan. According to the results, there are both commonalities and differences between the two cases. Some suggestions are as follows. Considering their predominant bargaining position, the concept of a large-scale retailer should be extended at the policy level from a single store to chains of numerous stores. The government needs to examine appropriate standards for defining a large-scale retailer, such as the size of the selling area and the amount of annual sales. When a large-scale retail store is to be established, a permit or pre-inspection should be required. The Fair Trade Commission has to secure neutrality from the government's strategies. The government should also continually identify examples of unfair transaction behavior types and prepare proper guidelines. Finally, statistical data by distributor type should be compiled, and field surveys for estimating the effects of government policies should be carried out.


A Fast Processor Architecture and 2-D Data Scheduling Method to Implement the Lifting Scheme 2-D Discrete Wavelet Transform (리프팅 스킴의 2차원 이산 웨이브릿 변환 하드웨어 구현을 위한 고속 프로세서 구조 및 2차원 데이터 스케줄링 방법)

  • Kim Jong Woog;Chong Jong Wha
    • Journal of the Institute of Electronics Engineers of Korea SD / v.42 no.4 s.334 / pp.19-28 / 2005
  • In this paper, we propose a fast parallel 2-D discrete wavelet transform hardware architecture based on the lifting scheme. The proposed architecture improves the 2-D processing speed and reduces the internal memory buffer size. Previous lifting-scheme-based parallel 2-D wavelet transform architectures consisted of row-direction and column-direction modules, each a pair of prediction and update filter modules. In the 2-D wavelet transform, the column-direction processing uses the row-direction results, which are generated in row-direction order rather than column-direction order, so most hardware architectures need an internal buffer memory. The proposed architecture focuses on reducing the internal memory buffer size and the total calculation time. To reduce the total calculation time, we propose a 4-way data flow scheduling and a memory-based parallel hardware architecture. The 4-way data flow scheduling increases the row-direction parallelism and reduces the initial latency of starting the row-direction calculation. In this architecture, the internal buffer memory is not used to store the results of the row-direction calculation; instead, it holds intermediate values of the column-direction calculation. This is very effective in the column-direction processing, because the input data of the column direction are not generated in column-direction order. The proposed architecture was implemented in VHDL on an Altera Stratix device. The implementation results show that the overall calculation time is reduced from $N^2/2+\alpha$ to $N^2/4+\beta$ and that the internal buffer memory size is reduced by around $50\%$ compared with previous works.
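
For reference, the sketch below is a plain software version of a lifting-scheme 2-D DWT (a Haar/S-transform predict-update pass applied along rows and then along columns). It is not the proposed 4-way scheduled hardware, which interleaves the column pass with the row pass instead of buffering whole intermediate rows.

```python
# Illustrative software reference (not the proposed hardware) of a
# lifting-scheme 2-D DWT: one predict/update (Haar / S-transform) pass
# applied first along rows and then along columns.
import numpy as np

def lift_1d(x: np.ndarray) -> np.ndarray:
    even, odd = x[0::2].astype(int), x[1::2].astype(int)
    d = odd - even                 # predict step: detail coefficients
    s = even + (d >> 1)            # update step: approximation coefficients
    return np.concatenate([s, d])

def dwt2_lifting(image: np.ndarray) -> np.ndarray:
    rows = np.array([lift_1d(r) for r in image])        # row-direction pass
    cols = np.array([lift_1d(c) for c in rows.T]).T     # column-direction pass
    return cols

img = np.arange(16, dtype=int).reshape(4, 4)
print(dwt2_lifting(img))
```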

Elliptic Curve Cryptography Coprocessors Using Variable Length Finite Field Arithmetic Unit (크기 가변 유한체 연산기를 이용한 타원곡선 암호 프로세서)

  • Lee Dong-Ho
    • Journal of the Institute of Electronics Engineers of Korea SD / v.42 no.1 / pp.57-67 / 2005
  • Fast scalar multiplication of points on an elliptic curve is important for elliptic curve cryptography applications. In order to vary the field size depending on the security situation, cryptography coprocessors should support variable-length finite field arithmetic units. To determine an effective variable-length finite field arithmetic architecture, two well-known curve scalar multiplication algorithms were implemented on an FPGA. The affine-coordinates algorithm requires a hardware division unit, whereas the projective-coordinates algorithm uses only a fast multiplication unit but needs more space to store intermediate results. To make the division unit support variable lengths, a feedback signal line must be added at every bit position; we propose a method to mitigate this problem. For the multiplication in the projective-coordinates implementation, we use a widely used digit-serial multiplication hardware, which is simpler to make variable-length. We experimented with our ECC coprocessors using a variable-length finite field arithmetic unit with a maximum field size of 256. At a clock speed of 40 MHz, the scalar multiplication takes 6.0 ms for the affine implementation and 1.15 ms for the projective implementation. As a result of the study, we found that the projective-coordinates algorithm, which does not use the division hardware, is faster than the affine-coordinates algorithm. In addition, the effectiveness of memory implementation relative to logic implementation has a large influence on the implementation space requirements of the two algorithms.
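
The affine-versus-projective trade-off above comes down to where the field division (inversion) happens. The sketch below shows a double-and-add scalar multiplication in affine coordinates over a toy prime field, where every point addition needs a modular inverse; the coprocessor in the paper works over a binary field with dedicated hardware, so the curve and field here are purely illustrative.

```python
# Illustrative sketch (software, prime field) of why projective coordinates
# avoid the division unit: affine point addition needs a modular inverse
# (the "division") at every step, while projective formulas defer it to a
# single inverse at the end.  Toy parameters, not the coprocessor's field.
p, a, b = 97, 2, 3                       # toy curve: y^2 = x^3 + 2x + 3 mod 97

def inv(x):                              # modular inverse = the costly division
    return pow(x, p - 2, p)

def add_affine(P, Q):
    if P is None: return Q
    if Q is None: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None                      # point at infinity
    if P == Q:
        lam = (3 * x1 * x1 + a) * inv(2 * y1) % p
    else:
        lam = (y2 - y1) * inv(x2 - x1) % p
    x3 = (lam * lam - x1 - x2) % p
    return (x3, (lam * (x1 - x3) - y1) % p)

def scalar_mult(k, P):
    """Left-to-right double-and-add."""
    R = None
    for bit in bin(k)[2:]:
        R = add_affine(R, R)             # double
        if bit == "1":
            R = add_affine(R, P)         # add
    return R

G = (3, 6)                               # on the curve: 6^2 = 36 = 3^3 + 2*3 + 3 (mod 97)
print(scalar_mult(13, G))
```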

Implementation and Performance Measuring of Erasure Coding of Distributed File System (분산 파일시스템의 소거 코딩 구현 및 성능 비교)

  • Kim, Cheiyol;Kim, Youngchul;Kim, Dongoh;Kim, Hongyeon;Kim, Youngkyun;Seo, Daewha
    • The Journal of Korean Institute of Communications and Information Sciences / v.41 no.11 / pp.1515-1527 / 2016
  • With the growth of big data, machine learning, and cloud computing, the importance of storage that can hold large amounts of unstructured data has been growing. Commodity-hardware-based distributed file systems such as MAHA-FS, GlusterFS, and the Ceph file system have therefore received a lot of attention because of their scale-out and low-cost properties. For data fault tolerance, most of these file systems initially used replication, but as storage sizes grow to tens or hundreds of petabytes, the low space efficiency of replication has come to be considered a problem. This paper applies an erasure coding data fault tolerance policy to MAHA-FS for high space efficiency and introduces the VDelta technique to solve the data consistency problem. This paper also compares the performance of two file systems, MAHA-FS and GlusterFS, which have different IO processing architectures: the former is server-centric and the latter is client-centric. We found that the erasure coding performance of MAHA-FS is better than that of GlusterFS.
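
The space-efficiency argument for erasure coding can be illustrated with the simplest possible code. The sketch below uses a single XOR parity chunk (not MAHA-FS's actual erasure code) to show how k data chunks plus one parity chunk survive the loss of any one chunk at (k+1)/k storage overhead, compared with 3x for triple replication.

```python
# Illustrative sketch of the space-efficiency idea behind erasure coding
# (single XOR parity here, not MAHA-FS's real RS-style code): k data chunks
# plus one parity chunk tolerate one lost chunk.
from functools import reduce

def encode(chunks: list[bytes]) -> bytes:
    """Parity chunk = XOR of all data chunks."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), chunks)

def recover(surviving: list[bytes], parity: bytes) -> bytes:
    """Rebuild the single missing data chunk from the survivors + parity."""
    return encode(surviving + [parity])

data = [b"AAAA", b"BBBB", b"CCCC", b"DDDD"]        # k = 4 data chunks
parity = encode(data)

lost = data.pop(1)                                  # lose one chunk
print(recover(data, parity) == lost)                # True: chunk rebuilt

k = 4
print(f"replication x3 overhead: 3.00, EC({k}+1) overhead: {(k + 1) / k:.2f}")
```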

A Case of Bronchopulmonary Atypical Carcinoid Tumor with Liver Metastasis (간전이를 동반한 폐기관지 비정형 카르시노이드 종양 1예)

  • Lee, Dong Soo;Lee, Tae Won;Kim, Gye Yean;Kim, Hwi Jung;Song, So Hyang;Kim, Seok Chan;Kim, Young Kyoon;Song, Jung Sup;Park, Sung Hak
    • Tuberculosis and Respiratory Diseases / v.43 no.4 / pp.623-629 / 1996
  • Bronchial carcinoid tumors are uncommon, constituting approximately 5% of all primary lung cancers. Carcinoid tumors belong to the class of neuroendocrine tumors, which consist of cells that can store and secrete neuroamines and neuropeptides. Neuroendocrine tumors of the lung include three pathologic types: a low-grade malignancy, the so-called "typical carcinoid"; a more aggressive tumor, the "atypical carcinoid"; and the most aggressive malignant neoplasm, the small-cell carcinoma. The atypical carcinoid tumor has a higher malignant potential and is more commonly peripheral than the typical carcinoid tumor. Histologic features that characterize a carcinoid as histologically atypical include increased mitotic activity; pleomorphism and irregularity of nuclei with prominent nucleoli, hyperchromatism, and an abnormal nuclear-cytoplasmic ratio; areas of increased cellularity with disorganization of architecture; and areas of tumor necrosis. Metastatic involvement of regional lymph nodes and distant organs is common. The prognosis is related to the size of the tumor, typical or atypical appearance, endoluminal or extraluminal growth, vascular invasion, and nodal metastasis. Pulmonary resection is the treatment of choice for bronchial carcinoid. We experienced one case of bronchopulmonary atypical carcinoid tumor. In this case, radiologic study showed a solitary lung mass with liver metastasis, and the level of 5-HIAA was elevated. There was no history of cutaneous flushing, diarrhea, or valvular heart disease. The authors report a case of bronchopulmonary atypical carcinoid tumor with a review of the literature.
