• Title/Summary/Keyword: 데이터 삽입 (data insertion)

Search Result 769, Processing Time 0.022 seconds

An Improvement of Still Image Quality Based on Error Resilient Entropy Coding for Random Error over Wireless Communications (무선 통신상 임의 에러에 대한 에러내성 엔트로피 부호화에 기반한 정지영상의 화질 개선)

  • Kim Jeong-Sig;Lee Keun-Young
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.43 no.3 s.309
    • /
    • pp.9-16
    • /
    • 2006
  • Many image and video compression algorithms work by splitting the image into blocks and producing variable-length code bits for each block of data. If variable-length code data are transmitted consecutively over an error-prone channel without any error protection technique, the receiving decoder cannot decode the stream properly. So the standard image and video compression algorithms insert some redundant information into the stream to provide protection against channel errors. One such redundancy is the resynchronization marker, which enables the decoder to restart the decoding process from a known state in the event of transmission errors, but its usage should be restricted so as not to consume too much bandwidth. The Error Resilient Entropy Code (EREC) is a well-known method which can regain synchronization without any redundant information. It can work with all prefix codes, which many image compression methods use. This paper proposes the EREREC method to improve FEREC (Fast Error-Resilient Entropy Coding). It first calculates the initial searching position according to the bit lengths of consecutive blocks. Second, the initial offset is decided using the statistical distribution of long and short blocks, and the initial offset can be adjusted to ensure that all offset sequence values can be used. The proposed EREREC algorithm can speed up the construction of FEREC slots and can improve the compressed image quality in the event of transmission errors. The simulation results show that the quality of the transmitted image is enhanced by about 0.3~3.5 dB compared with the existing FEREC when random channel errors happen.
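The slot-filling mechanism that EREC-style coders build on can be sketched in a few lines. This is a toy simulation over block bit-lengths only; the function name and the plain 0..N-1 offset sequence are illustrative, while real EREC/FEREC reorders actual code bits and uses a pseudo-random offset sequence:

```python
import math

def erec_pack(block_lens, offsets=None):
    """Simulate EREC slot packing over block bit-lengths (not actual bits).

    Each block starts in its own equal-size slot; leftover bits migrate to
    slots with spare capacity, following an offset sequence. Because slot
    boundaries are at fixed positions, a decoder can resynchronize at each
    slot start without extra markers.
    """
    n = len(block_lens)
    slot = math.ceil(sum(block_lens) / n)   # equal slot size
    remaining = list(block_lens)            # bits of each block still to place
    free = [slot] * n                       # spare capacity per slot
    placed = [[] for _ in range(n)]         # (block, bits) placed in each slot

    offsets = offsets or list(range(n))     # stage 0 uses offset 0
    for off in offsets:
        for b in range(n):
            if remaining[b] == 0:
                continue
            s = (b + off) % n
            take = min(remaining[b], free[s])
            if take:
                placed[s].append((b, take))
                free[s] -= take
                remaining[b] -= take
        if not any(remaining):
            break
    return placed, remaining

# 20 bits over 4 blocks -> 4 slots of 5 bits; everything fits, no redundancy.
placed, left = erec_pack([6, 2, 9, 3])
assert left == [0, 0, 0, 0]
```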

PAPR Reduction Method for the Nonlinear Distortion in the Multicode CDMA System (멀티코드 CDMA 시스템에서 비선형 왜곡에 대처하는 PAPR 저감 기법)

  • Kim Sang-Woo;Kim Namil;Kim Sun-Ae;Suh Jae-Won;Ryu Heung-Cyoon
    • The Journal of Korean Institute of Electromagnetic Engineering and Science
    • /
    • v.16 no.12 s.103
    • /
    • pp.1171-1178
    • /
    • 2005
  • Multi-code code division multiple access (MC-CDMA) has been proposed for providing various service rates with different quality-of-service requirements by assigning multiple codes and increasing the capacity. However, it suffers from the serious problem of a high peak-to-average power ratio (PAPR). So, it requires a large input back-off, which causes poor power efficiency in the high power amplifier (HPA). In this paper, we propose a new method that can reduce the PAPR efficiently by constraint codes based on the opposite correlation to the incoming information data in MC-CDMA. The PAPR reduction depends on the length and indices of the constraint codes in the MC-CDMA system. There is a trade-off between PAPR reduction and the length of the constraint codes. From the simulation results, we also investigate the BER improvement in an AWGN channel with an HPA. The simulation results show that the BER performance can be similar to that of a linear amplifier in two cases: 1) using exact constraint codes without input back-off, and 2) using a few constraint codes with a small input back-off.
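For readers unfamiliar with the metric, PAPR can be illustrated on Walsh-Hadamard spreading codes. This sketch is not the paper's constraint-code scheme; it only shows why summing many parallel code channels (multicode transmission) raises the peak:

```python
import numpy as np

def papr_db(signal):
    """Peak-to-average power ratio of a discrete signal, in dB."""
    power = np.abs(signal) ** 2
    return 10 * np.log10(power.max() / power.mean())

# Walsh-Hadamard codes of length 8 (rows of the Sylvester Hadamard matrix).
H = np.array([[1]])
for _ in range(3):
    H = np.block([[H, H], [H, -H]])

one_code = H[1].astype(float)            # a single +/-1 code: constant power
six_codes = H[1:7].sum(axis=0).astype(float)  # six channels summed: peaky

assert papr_db(one_code) == 0.0          # |x|^2 is flat, so PAPR is 0 dB
assert papr_db(six_codes) > 7.0          # the superposition has a large peak
```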

OFDM Communication System Using the Additive Control Tone for PAPR Reduction (PAPR 저감을 위하여 부가 Control 톤을 이용하는 OFDM 통신 시스템)

  • Kim Jin-Kwan;Lee Ill-Jin;Ryu Heung-Gyoon
    • The Journal of Korean Institute of Electromagnetic Engineering and Science
    • /
    • v.16 no.12 s.103
    • /
    • pp.1229-1238
    • /
    • 2005
  • OFDM (Orthogonal Frequency Division Multiplexing) communication systems are very attractive for high-data-rate wireless transmission. However, the signal may be distorted in the nonlinear HPA (High Power Amplifier), since an OFDM signal has a high PAPR (Peak-to-Average Power Ratio). In this paper, a new method using a control tone is studied for reducing the PAPR; we call it the PCT (PAPR Control Tone) method. The proposed PCT method assigns control tones for PAPR reduction at predefined sub-carriers. After the IFFT (Inverse Fast Fourier Transform) and PAPR calculation, the OFDM data signal with the lowest PAPR is selected for transmission. Unlike the conventional method, it can cut down the computational complexity because it does not require the transmission and demodulation of side information about the phase rotation. Furthermore, if this method is implemented in a parallel configuration, it can solve the time delay problem so that it can run in real time. The proposed method is compared with the conventional selected mapping (SLM) technique. We find the PAPR reduction performance and BER when the number of control tones is 6 and a nonlinear HPA is considered.
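The select-the-lowest-PAPR-candidate loop described above can be sketched as follows. The random control-tone values and BPSK payload are simplifications of ours, not the paper's actual PCT optimization:

```python
import numpy as np

rng = np.random.default_rng(0)

def papr_db(x):
    """PAPR of a complex baseband symbol, in dB."""
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

n_sub, n_ctrl, n_trials = 64, 6, 16
data = rng.choice([1.0, -1.0], size=n_sub - n_ctrl)  # BPSK payload for brevity

candidates, paprs = [], []
for _ in range(n_trials):
    ctrl = rng.choice([1.0, -1.0], size=n_ctrl)      # candidate control-tone values
    spectrum = np.concatenate([ctrl, data])          # control tones at fixed sub-carriers
    symbol = np.fft.ifft(spectrum)                   # time-domain OFDM symbol
    candidates.append(symbol)
    paprs.append(papr_db(symbol))

# Transmit the candidate with the lowest PAPR; the data sub-carriers are
# untouched, so the receiver needs no side information about them.
best = candidates[int(np.argmin(paprs))]
```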

A Method of Integrating Scan Data for 3D Face Modeling (3차원 얼굴 모델링을 위한 스캔 데이터의 통합 방법)

  • Yoon, Jin-Sung;Kim, Gye-Young;Choi, Hyung-Il
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.46 no.6
    • /
    • pp.43-57
    • /
    • 2009
  • Integrating 3D data acquired from multiple views is one of the most important techniques in 3D modeling. However, the existing integration methods are sensitive to registration errors and surface scanning noise. In this paper, we propose an integration algorithm using the local surface topology. We first find all boundary vertex pairs satisfying a prescribed geometric condition in the areas between neighboring surfaces, and then separate the areas into several regions by using the boundary vertex pairs. We next compute best-fitting planes for each region through PCA (Principal Component Analysis). They are used to produce triangles to be inserted into the empty areas between neighboring surfaces. Since each region between neighboring surfaces can be integrated by using the local surface topology, the proposed method is robust to registration errors and surface scanning noise. We also propose a method for integrating textures by using a parameterization technique. We first transform the integrated surface into the initial viewpoint of each surface. We then project each texture onto the transformed integrated surface. The textures are then assigned to the parameter domain of the integrated surface and integrated along the seaming lines of the surfaces. Experimental results show that the proposed method is effective for face modeling.
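The PCA plane fitting used above reduces to taking the direction of least variance of the region's vertices. A minimal sketch on a synthetic noisy plane (the helper name is ours, not the paper's):

```python
import numpy as np

def fit_plane(points):
    """Best-fit plane via PCA: returns (centroid, unit normal).

    The normal is the direction of least variance, i.e. the right singular
    vector with the smallest singular value of the centered point cloud.
    """
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - centroid)
    return centroid, vt[-1]              # last row: smallest singular value

# Noisy samples of the plane z = 0.
rng = np.random.default_rng(1)
pts = np.column_stack([rng.uniform(-1, 1, (200, 2)),
                       rng.normal(0, 0.01, 200)])
c, n = fit_plane(pts)
assert abs(abs(n[2]) - 1.0) < 0.01       # recovered normal is close to the z axis
```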

Annotation-guided Code Partitioning Compiler for Homomorphic Encryption Program (지시문을 활용한 동형암호 프로그램 코드 분할 컴파일러)

  • Dongkwan Kim;Yongwoo Lee;Seonyoung Cheon;Heelim Choi;Jaeho Lee;Hoyun Youm;Hanjun Kim
    • The Transactions of the Korea Information Processing Society
    • /
    • v.13 no.7
    • /
    • pp.291-298
    • /
    • 2024
  • Despite its wide application, cloud computing raises privacy leakage concerns because users must send their private data to the cloud. Homomorphic encryption (HE) can resolve these concerns by allowing cloud servers to compute on encrypted data without decryption. However, due to the huge computation overhead of HE, simply executing an entire cloud program with HE causes significant computation. Manually partitioning the program and applying HE only to the partitioned program for the cloud can reduce the computation overhead. However, manual code partitioning and HE transformation are time-consuming and error-prone. This work proposes a new homomorphic-encryption-enabled annotation-guided code partitioning compiler, called Heapa, for privacy-preserving cloud computing. Heapa allows programmers to annotate a program with the code regions intended for cloud computing. Then, Heapa analyzes the annotated program, makes a partition plan with a list of the variables that require communication and encryption, and generates homomorphic-encryption-enabled partitioned programs. Moreover, Heapa provides not only two region-level partitioning annotations but also two instruction-level annotations, thus enabling fine-grained partitioning and achieving better performance. For six machine learning and deep learning applications, Heapa achieves a 3.61 times geomean performance speedup compared to the non-partitioned cloud computing scheme.
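The annotation idea can be caricatured in plain Python. The decorator name and the "partition plan" list below are hypothetical stand-ins for Heapa's real annotations and compiler analysis, which the abstract does not spell out:

```python
# Toy sketch: mark regions for the HE-enabled cloud side; everything else
# stays on the client in plaintext. (All names here are illustrative only.)
CLOUD_REGIONS = []

def cloud_region(fn):
    """Hypothetical annotation: offload this function to the cloud under HE."""
    CLOUD_REGIONS.append(fn.__name__)
    return fn

@cloud_region
def dot(w, x):
    # Would run homomorphically on the server over encrypted inputs.
    return sum(a * b for a, b in zip(w, x))

def preprocess(x):
    # Stays on the client; no encryption or communication needed.
    return [v / 255 for v in x]

# A compiler pass would read the markers to build the partition plan.
assert CLOUD_REGIONS == ["dot"]
```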

X-tree Diff: An Efficient Change Detection Algorithm for Tree-structured Data (X-tree Diff: 트리 기반 데이터를 위한 효율적인 변화 탐지 알고리즘)

  • Lee, Suk-Kyoon;Kim, Dong-Ah
    • The KIPS Transactions:PartC
    • /
    • v.10C no.6
    • /
    • pp.683-694
    • /
    • 2003
  • We present X-tree Diff, a change detection algorithm for tree-structured data. Our work is motivated by the need to monitor a massive volume of web documents and detect suspicious changes, called defacement attacks, on web sites. In this context, our algorithm should be very efficient in speed and use of memory space. X-tree Diff uses a special ordered labeled tree, the X-tree, to represent XML/HTML documents. X-tree nodes have a special field, tMD, which stores a 128-bit hash value representing the structure and data of the subtree, so that identical subtrees from the old and new versions can be matched. During this process, X-tree Diff uses the Rule of Delaying Ambiguous Matchings, meaning that it performs exact matching only where a node in the old version has a one-to-one correspondence with the corresponding node in the new version, delaying all the others. This drastically reduces the possibility of wrong matchings. X-tree Diff propagates such exact matchings upwards in Step 2, and obtains more matchings downwards from the roots in Step 3. In Step 4, nodes to be inserted or deleted are decided. We also show that X-tree Diff runs in O(n), where n is the number of nodes in the X-trees, in the worst case as well as in the average case. This result is even better than that of the BULD Diff algorithm, which is O(n log(n)) in the worst case. We experimented with X-tree Diff on real data, about 11,000 home pages from about 20 web sites, instead of synthetic documents manipulated for experimentation. Currently, the X-tree Diff algorithm is being used in a commercial hacking detection system, called WIDS (Web-Document Intrusion Detection System), which finds changes occurring in registered websites and reports suspicious changes to users.
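The tMD idea, hashing a subtree's structure and data so that whole subtrees can be matched in one comparison, can be sketched with MD5, which conveniently is a 128-bit digest like the field the abstract describes. The node encoding below is our own, not the paper's:

```python
import hashlib

def tmd(node):
    """128-bit subtree digest: hash of label, text, and child digests.

    Equal digests mean (up to hash collisions) identical subtrees, so two
    versions of a document can be matched subtree-by-subtree without
    walking into unchanged regions.
    """
    label, text, children = node          # node = (label, text, [children])
    h = hashlib.md5(label.encode() + b"\x00" + text.encode())
    for child in children:
        h.update(tmd(child))              # children's digests fold in, in order
    return h.digest()

old = ("div", "", [("p", "hello", []), ("p", "world", [])])
new = ("div", "", [("p", "hello", []), ("p", "defaced", [])])

assert tmd(old) != tmd(new)               # root digest flags the change
assert tmd(old[2][0]) == tmd(new[2][0])   # untouched subtree still matches
```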

Effect of Patient Size on Image Quality and Dose Reduction after Added Filtration in Digital Chest Tomosynthesis (부가필터를 적용한 디지털 흉부단층합성검사에서 환자 체형에 따른 화질 평가와 선량감소 효과)

  • Bok, Geun-Seong;Kim, Sang-Hyun
    • Journal of the Korean Society of Radiology
    • /
    • v.12 no.1
    • /
    • pp.23-30
    • /
    • 2018
  • To evaluate the effect of patient size on effective dose and image quality for digital chest tomosynthesis (DTS) using additional 0.3 mm copper filtration, eighty artificial nodules were placed in a thorax phantom ("Lungman," Kyoto Kagaku, Japan), and DTS images of the phantom were acquired both with and without added 0.3 mm Cu filtration. To simulate patients of three sizes (small, average, and oversize), one or two 20-mm-thick PMMA (polymethyl methacrylate) blocks were placed on the phantom. The effective dose (ED) was calculated using Monte Carlo simulations. Two image quality evaluation methods were employed: three readers counted the number of nodules detected in the lung, and the measured contrast-to-noise ratios (CNRs) were used. The data were analyzed statistically. The ED was reduced by 26 μSv for the phantom alone, by 33 μSv with one 20-mm-thick PMMA block placed on the phantom, and by 48 μSv with two 20-mm-thick PMMA blocks placed on the phantom. The ED differences between DTS with and without filtration were significant (p<0.05). In particular, with two 20-mm-thick PMMA blocks placed on the phantom, the ED was significantly reduced, by 36%, compared with that without additional filtration. Nodule detection sensitivities did not differ with and without added filtration. Differences in CNRs were statistically insignificant (p>0.05). The use of additional filtration allows a considerable dose reduction during DTS without loss of image quality. In particular, additional filtration showed an outstanding effective dose reduction with two 20-mm-thick PMMA blocks placed on the phantom, which corresponds to overweight patients.
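The CNR figure of merit used above has a standard form: mean signal difference over background noise. A minimal sketch with made-up ROI pixel values (the exact ROI definition used in the study may differ):

```python
import numpy as np

def cnr(signal_roi, background_roi):
    """Contrast-to-noise ratio: |mean difference| over background noise."""
    s = np.asarray(signal_roi, dtype=float)
    b = np.asarray(background_roi, dtype=float)
    return abs(s.mean() - b.mean()) / b.std()

# Illustrative pixel samples from a nodule ROI and a lung-background ROI.
nodule = [120, 122, 118, 121]
lung = [100, 102, 98, 100]
assert cnr(nodule, lung) > 10   # a clearly visible nodule
```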

Early and Midterm Results of Hybrid Endovascular Repair for Thoracic Aortic Disease (흉부대동맥 질환에서 시행된 하이브리드 혈관내 성형술의 중단기 성적)

  • Youn, Young-Nam;Kim, Kwan-Wook;Hong, Soon-Chang;Lee, Sak;Chang, Byung-Chul;Song, Seung-Jun
    • Journal of Chest Surgery
    • /
    • v.43 no.5
    • /
    • pp.490-498
    • /
    • 2010
  • Background: A hybrid procedure using an open surgical extra-anatomic bypass of the aortic arch vessels and thoracic endovascular aortic repair (TEVAR) is less invasive than open surgery and provides a suitable proximal landing zone. Here we report our experience with a hybrid TEVAR procedure at a single center. Material and Method: We retrospectively reviewed consecutive patients with thoracic aortic disease who received a hybrid TEVAR procedure between August 2008 and January 2010. Patients' data were prospectively collected, and the mean follow-up was 10.8±5.5 months (range 3~20). Result: Nine patients (7 males and 2 females) with a mean age of 63.8±15.8 years (range 38~84) underwent the hybrid procedure. Five patients had an arch or a proximal descending aortic aneurysm, two had a dissecting aneurysm of the descending aorta, and two had an aneurysm of the ascending arch and descending aorta. The mean expected mortality calculated by the logistic EuroSCORE was 21%. Six patients underwent debranching and rerouting from the ascending aorta to the arch vessels, 2 had carotid-carotid bypass grafting, and 1 underwent carotid-axillary bypass grafting. The mean operation time was 221.4±84.0 min (range 94~364). The deployment success rate of endovascular stent grafting was 100%, with no endoleak on completion angiography. There was no mortality; one patient had a small embolism in a branch of the right ophthalmic artery. During follow-up, one intervention was required for an endoleak. Actuarial survival at 20 months was 100%. Conclusion: The early and mid-term results are encouraging and suggest that hybrid TEVAR procedures are less invasive and safer, and represent an effective technique for treating thoracic aortic disease.

A Study on Termite Monitoring Method Using Magnetic Sensors and IoT(Internet of Things) (자력센서와 IoT(사물인터넷)를 활용한 흰개미 모니터링 방법 연구)

  • Go, Hyeongsun;Choe, Byunghak
    • Korean Journal of Heritage: History & Science
    • /
    • v.54 no.1
    • /
    • pp.206-219
    • /
    • 2021
  • The warming climate is increasing the damage caused by termites to wooden buildings, cultural properties, and houses. A group removal system can be installed around a building to detect and remove termite damage; however, unless the site is visited regularly, every one to two months, one cannot observe whether termites have spread within it, and it is difficult to take prompt, effective action. In addition, since the system is installed and operated in an exposed state for a long period of time, it may become ineffective or damaged, resulting in a loss of function. Furthermore, if the system is installed near a cultural site, it may affect the aesthetic environment of the site. In this study, we created a detection system that uses wood, cellulose, magnets, and magnetic sensors to determine whether termites have entered the area. The data were then transferred over a low-power LoRa network, which displayed the results without the necessity of visiting the site. The wood was made in the shape of a pile, and holes were made from the top to the bottom to make it easier for termites to enter and reach the cellulose sample. The cellulose sample was made in a cylindrical shape, with a magnet wrapped in cellulose, and inserted into the top of a hole in the wood. Then, the upper part of the wood pile was covered with a stopper to prevent foreign matter from entering. The stopper also served to block external factors such as light and rainfall, and to create an environment where termites could feed on the cellulose samples. When the cellulose was eaten away by the termites, a space was created around the magnet, causing the magnet to either fall or tilt. The magnetic sensor inside the stopper was fixed on top of the cellulose sample and measured the change in the distance between the magnet and the sensor according to the movement of the magnet.
In outdoor experiments, 11 cellulose samples were inserted into the wood detection system, and the termite inflow was confirmed through the movement of the magnet, without visiting the site, within 5 to 17 days. With further improvements to the function and operation of the system in the future, it will be possible to confirm that termites have invaded without visiting the site, to reduce damage and malfunction due to product exposure, and to improve the condition and appearance of cultural properties.
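The detection principle, a fixed sensor noticing that the magnet has moved, amounts to a threshold test on the measured field change. The units and threshold below are illustrative, not the study's calibration:

```python
def termite_alert(baseline_mT, reading_mT, threshold_mT=0.5):
    """Flag a site when the magnetic field change exceeds a threshold.

    A fallen or tilted magnet (its cellulose support eaten away) changes
    the field at the fixed sensor; the node would report the flag over
    the low-power LoRa network instead of requiring a site visit.
    """
    return abs(reading_mT - baseline_mT) > threshold_mT

assert not termite_alert(3.2, 3.1)   # magnet still in place: no alert
assert termite_alert(3.2, 0.4)       # magnet dropped: termites detected
```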

Design and Fabrication of an Oscillator with Low Phase Noise Characteristics using a Phase Locked Loop (위상고정루프를 이용한 낮은 위상 잡음 특성을 갖는 발진기 설계 및 제작)

  • Park, Chang-Hyun;Kim, Jang-Gu;Choi, Byung-Ha
    • Journal of Navigation and Port Research
    • /
    • v.30 no.10 s.116
    • /
    • pp.847-853
    • /
    • 2006
  • In this paper, we designed a VCO (voltage-controlled oscillator) that is composed of a dielectric resonator and a varactor diode, and a PLDRO (phase-locked dielectric resonator oscillator) that combines it with a sampling phase detector and a loop filter. The results at 12.05 GHz show an output power of 13.54 dBm, a frequency tuning range of approximately +/- 7.5 MHz, and a power variation over the tuning range of less than 0.2 dB. The phase noise, which affects the bit error rate in digital communication, is -114.5 dBc/Hz at a 100 kHz offset from the carrier, and the second harmonic suppression is less than -41.49 dBc. These measured results are improved over those of the VCO without the PLL, and the phase noise and power variation characteristics are better than those of a conventional PLL.