Bandwidth

Distribution by Scientific Domains

Kinds of Bandwidth

  • adjustable bandwidth
  • antenna bandwidth
  • broad bandwidth
  • effective bandwidth
  • fractional bandwidth
  • frequency bandwidth
  • impedance bandwidth
  • MHz bandwidth
  • narrow bandwidth
  • network bandwidth
  • operating bandwidth
  • optimal bandwidth
  • ratio bandwidth
  • spectral bandwidth
  • sufficient bandwidth
  • transmission bandwidth

Terms modified by Bandwidth

  • bandwidth allocation
  • bandwidth efficiency
  • bandwidth performance
  • bandwidth request
  • bandwidth requirement
  • bandwidth utilization

Selected Abstracts


    Transcoding media for bandwidth constrained mobile devices

    INTERNATIONAL JOURNAL OF NETWORK MANAGEMENT, Issue 2 2005
    Kevin Curran
    Bandwidth is an important consideration when dealing with streaming media. More bandwidth is required for complex data such as video than for a simple audio file. When delivering streaming media, sufficient bandwidth is required to achieve an acceptable level of performance. If the streamed information exceeds the bandwidth capacity of the client, the result will be 'choppy' and incomplete, with possible loss of transmission. Transcoding typically refers to the adaptation of streaming content. Typical transcoding scenarios exploit content negotiation between different formats in order to obtain the optimal combination of requested quality and available resources. It is possible to transcode media to a lesser quality or size upon encountering adverse bandwidth conditions, without the need to encode multiple versions of the same file at differing quality levels. This study investigates the capability of transcoding for coping with restrictions in client devices. In addition, the properties of transcoded media files are examined and evaluated to determine their applicability for streaming in relation to a range of broad device types capable of receiving streaming media. Copyright © 2005 John Wiley & Sons, Ltd. [source]
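
    As a concrete illustration of the adaptation step described above, the sketch below picks a transcoding target that fits a measured client bandwidth. It is a minimal example, not the authors' system; the rendition ladder, bit rates and safety margin are hypothetical.

        # Minimal sketch (not the paper's system): choose a transcoding target so
        # the stream's bit rate fits the measured client bandwidth. The rendition
        # ladder and the 0.8 safety margin are illustrative assumptions.
        RENDITIONS = [                      # (label, bit rate in kbit/s), best first
            ("video-high", 1500),
            ("video-medium", 700),
            ("video-low", 300),
            ("audio-only", 64),
        ]

        def choose_rendition(available_kbps, margin=0.8):
            """Return the best rendition whose bit rate fits the available bandwidth."""
            budget = available_kbps * margin
            for label, rate in RENDITIONS:
                if rate <= budget:
                    return label, rate
            return RENDITIONS[-1]           # fall back to the smallest stream

        print(choose_rendition(1000.0))     # -> ('video-medium', 700)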


    A survey on vertex coloring problems

    INTERNATIONAL TRANSACTIONS IN OPERATIONAL RESEARCH, Issue 1 2010
    Enrico Malaguti
    Abstract This paper surveys the most important algorithmic and computational results on the Vertex Coloring Problem (VCP) and its generalizations. The first part of the paper introduces the classical models for the VCP, and discusses how these models can be used and possibly strengthened to derive exact and heuristic algorithms for the problem. Computational results on the best performing algorithms proposed in the literature are reported. The second part of the paper is devoted to some generalizations of the problem, which are obtained by considering additional constraints [Bandwidth (Multi) Coloring Problem, Bounded Vertex Coloring Problem] or an objective function with a special structure (Weighted Vertex Coloring Problem). The extension of the models for the classical VCP to the considered problems and the best performing algorithms from the literature, as well as the corresponding computational results, are reported. [source]
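
    To make the underlying problem concrete, here is a minimal greedy colouring heuristic for the classical VCP. It is a textbook baseline, not one of the algorithms surveyed in the paper; the graph and its adjacency-list representation are illustrative.

        # Greedy vertex colouring: give each vertex the smallest colour index not
        # used by an already-coloured neighbour (largest-degree-first ordering).
        def greedy_colouring(graph):
            colour = {}
            for v in sorted(graph, key=lambda u: len(graph[u]), reverse=True):
                used = {colour[w] for w in graph[v] if w in colour}
                c = 0
                while c in used:
                    c += 1
                colour[v] = c
            return colour

        # A triangle (0-1-2) plus a pendant vertex 3: three colours are needed.
        g = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
        print(greedy_colouring(g))          # e.g. {2: 0, 0: 1, 1: 2, 3: 1}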


    Robust Automatic Bandwidth for Long Memory

    JOURNAL OF TIME SERIES ANALYSIS, Issue 3 2001
    Marc Henry
    The choice of bandwidth, or number of harmonic frequencies, is crucial to semiparametric estimation of long memory in a covariance stationary time series as it determines the rate of convergence of the estimate, and a suitable choice can ensure robustness to some non-standard error specifications, such as (possibly long-memory) conditional heteroscedasticity. This paper considers mean squared error minimizing bandwidths proposed in the literature for the local Whittle, the averaged periodogram and the log periodogram estimates of long memory. Robustness of these optimal bandwidth formulae to conditional heteroscedasticity of general form in the errors is considered. Feasible approximations to the optimal bandwidths are assessed in an extensive Monte Carlo study that provides a good basis for comparison of the above-mentioned estimates with automatic bandwidth selection. [source]
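
    The sketch below shows where the bandwidth m (the number of harmonic frequencies) enters a log-periodogram estimate of the long-memory parameter d. It uses the textbook GPH-type regression with a simple illustrative rule m = sqrt(n); it does not implement the MSE-optimal or automatic bandwidth formulae studied in the paper.

        # Log-periodogram (GPH-type) estimate of the long-memory parameter d.
        # The bandwidth rule m = sqrt(n) is purely illustrative.
        import numpy as np

        def gph_estimate(x, m=None):
            x = np.asarray(x, dtype=float)
            n = len(x)
            if m is None:
                m = int(np.sqrt(n))                         # illustrative bandwidth
            freqs = 2.0 * np.pi * np.arange(1, m + 1) / n   # first m Fourier frequencies
            dft = np.fft.fft(x - x.mean())[1:m + 1]
            periodogram = np.abs(dft) ** 2 / (2.0 * np.pi * n)
            # log I(lambda_j) ~ const - 2 d log(2 sin(lambda_j / 2)) + noise
            regressor = -2.0 * np.log(2.0 * np.sin(freqs / 2.0))
            return np.polyfit(regressor, np.log(periodogram), 1)[0]   # slope = d-hat

        rng = np.random.default_rng(0)
        print(round(gph_estimate(rng.standard_normal(2048)), 3))      # near 0 for white noise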


    Behaviour-based multiplayer collaborative interaction management

    COMPUTER ANIMATION AND VIRTUAL WORLDS (PREV: JNL OF VISUALISATION & COMPUTER ANIMATION), Issue 1 2006
    Qingping Lin
    Abstract A collaborative virtual environment (CVE) allows geographically dispersed users to interact with each other and with objects in a common virtual environment via network connections. One of the successful applications of CVEs is the multiplayer on-line role-playing game. To support massive interactions among virtual entities in a large-scale CVE and to maintain a consistent interaction status among users under the constraint of limited network bandwidth, an efficient collaborative interaction management method is required. In this paper, we propose a behaviour-based interaction management framework for supporting multiplayer role-playing CVE applications. It incorporates a two-tiered architecture comprising high-level role-behaviour-based interaction management and low-level message routing. At the high level, interaction management is achieved by enabling interactions based on collaborative behaviour definitions. At the low level, message routing controls interactions according to the run-time status of the interactive entities. A Collaborative Behaviour Description Language is designed as a scripting interface for application developers to define the collaborative behaviours of interactive entities and the simulation logic/game rules in a CVE. We demonstrate and evaluate the performance of the proposed framework through a prototype system and simulations. Copyright © 2006 John Wiley & Sons, Ltd. [source]


    A measure for mesh compression of time-variant geometry

    COMPUTER ANIMATION AND VIRTUAL WORLDS (PREV: JNL OF VISUALISATION & COMPUTER ANIMATION), Issue 3-4 2004
    Prasun Mathur
    Abstract We present a novel measure for compression of time-variant geometry. Compression of time-variant geometry has become increasingly relevant as transmission of high quality geometry streams is severely limited by network bandwidth. Some work has been done on such compression schemes, but none of them give a measure for prioritizing the loss of information from the geometry stream while doing a lossy compression. In this paper we introduce a cost function which assigns a cost to the removal of particular geometric primitives during compression, based upon their importance in preserving the complete animation. We demonstrate that the use of this measure visibly enhances the performance of existing compression schemes. Copyright © 2004 John Wiley & Sons, Ltd. [source]
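
    The following sketch illustrates the general idea of costing the removal of geometric primitives in an animated mesh; here the (hypothetical) cost is simply each vertex's total displacement over the animation, so nearly static vertices are cheapest to drop. The paper's actual measure is different and more sophisticated.

        # Illustrative removal cost for vertices of a time-variant mesh: total
        # displacement across the animation (a stand-in for the paper's measure).
        import numpy as np

        def removal_cost(frames):
            """frames: array (n_frames, n_vertices, 3) of vertex positions."""
            frames = np.asarray(frames, dtype=float)
            step = np.linalg.norm(np.diff(frames, axis=0), axis=2)  # per-frame motion
            return step.sum(axis=0)                                 # low cost = safe to drop

        frames = np.zeros((3, 4, 3))
        frames[1, 0] = [1.0, 0.0, 0.0]      # vertex 0 moves between frames 0 and 1
        frames[2, 0] = [1.0, 1.0, 0.0]      # ...and again between frames 1 and 2
        print(removal_cost(frames))         # [2. 0. 0. 0.] -> keep vertex 0 longest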


    Effective page refresh policy

    COMPUTER APPLICATIONS IN ENGINEERING EDUCATION, Issue 3 2007
    Kai Gao
    Abstract Web pages are created or updated randomly, so a search engine must keep up with the evolving Web. Previous studies have shown that a crawler's refresh ability is limited, because changes are not easy to detect instantly, especially when resources are limited. This article concerns modeling an effective Web page refresh policy and finding the refresh interval with minimum total waiting time. The major concerns are how to model the change and which parts should be updated more often. Toward this goal, a Poisson process is used to model page changes. Further, relevance is used to adjust the model: the change probability of some sites is higher than that of others, so those sites are given more opportunities to be updated. This is essential when the bandwidth is not wide enough or resources are limited. The experimental results validate the feasibility of the approach. On the basis of the above work, an educational search engine has been developed. © 2007 Wiley Periodicals, Inc. Comput Appl Eng Educ 14: 240-247, 2007; Published online in Wiley InterScience (www.interscience.wiley.com); DOI 10.1002/cae.20155 [source]
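
    A minimal sketch of the idea: estimate each page's Poisson change rate from observed changes and weight it by relevance to decide which pages to refresh first. The scoring rule and the page names/numbers are assumptions for illustration, not the article's exact model.

        # Refresh priority = estimated Poisson change rate, scaled by relevance.
        # The 0.5 + relevance weighting is an illustrative assumption.
        def refresh_priority(changes_observed, window_days, relevance):
            lam = changes_observed / float(window_days)   # changes per day
            return lam * (0.5 + relevance)                # higher -> crawl sooner

        pages = {
            "news/index.html":   refresh_priority(30, 10, 0.9),  # changes ~3x/day
            "docs/archive.html": refresh_priority(1, 30, 0.4),   # rarely changes
        }
        print(sorted(pages, key=pages.get, reverse=True))        # most urgent first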


    Kinematics, Dynamics, Biomechanics: Evolution of Autonomy in Game Animation

    COMPUTER GRAPHICS FORUM, Issue 3 2005
    Steve Collins
    The believable portrayal of character performances is critical in engaging the immersed player in interactive entertainment. The story, the emotion and the relationship between the player and the world they are interacting within are hugely dependent on how appropriately the world's characters look, move and behave. We're concerned here with the character's motion; with next-generation game consoles like the Xbox 360™ and PlayStation®3, the graphical representation of characters will take a major step forward, which places even more emphasis on the motion of the character. The behavior of the character is driven by story and design, which are adapted to game context by the game's AI system. The motion of the characters populating the game's world, however, is evolving to an interesting blend of kinematics, dynamics, biomechanics and AI-driven motion planning. Our goal here is to present the technologies involved in creating what are essentially character automata, emotionless and largely brainless character shells that nevertheless exhibit enough "behavior" to move as directed while adapting to the environment through sensing and actuating responses. This abstracts the complexities of low-level motion control, dynamics, collision detection, etc., and allows the game's artificial intelligence system to direct these characters at a higher level. While much research has already been conducted in this area and some great results have been published, we will present the particular issues that face game developers working on current and next-generation consoles, and how these technologies may be integrated into game production pipelines so as to facilitate the creation of character performances in games. The challenges posed by limited memory and CPU bandwidth (though this is changing somewhat with the next generation) and the challenges of integrating these solutions with current game design approaches lead to some interesting problems, some of which the industry has solutions for and some others which still remain largely unsolved. [source]


    Mobile Agent Computing Paradigm for Building a Flexible Structural Health Monitoring Sensor Network

    COMPUTER-AIDED CIVIL AND INFRASTRUCTURE ENGINEERING, Issue 7 2010
    Bo Chen
    While the sensor network approach is a feasible solution for structural health monitoring, the design of wireless sensor networks presents a number of challenges, such as adaptability and the limited communication bandwidth. To address these challenges, we explore the mobile agent approach to enhance flexibility and reduce raw data transmission in wireless structural health monitoring sensor networks. An integrated wireless sensor network consisting of a mobile agent-based network middleware and distributed high-computational-power sensor nodes is developed. These embedded-computer-based, high-computational-power sensor nodes run the Linux operating system, integrate open-source numerical libraries, and connect to multimodality sensors to support both active and passive sensing. The mobile agent middleware is built on a mobile agent system called Mobile-C. The mobile agent middleware allows a sensor network to move computational programs to the data source. With mobile agent middleware, a sensor network is able to adopt newly developed diagnosis algorithms and make adjustments in response to operational or task changes. The presented mobile agent approach has been validated for structural damage diagnosis using a scaled steel bridge. [source]


    NMR and the uncertainty principle: How to and how not to interpret homogeneous line broadening and pulse nonselectivity.

    CONCEPTS IN MAGNETIC RESONANCE, Issue 4 2008

    Abstract Following the treatments presented in Parts I and II, I herein discuss in more detail the popular notion that the frequency of a monochromatic RF pulse as well as that of a monochromatic FID is "in effect" uncertain due to the (Heisenberg) Uncertainty Principle, which also manifests itself in the fact that the FT-spectrum of these temporal entities is spread over a nonzero frequency band. In Part III, I continue my preliminary review of some further fundamental concepts, such as the Heisenberg and Fourier Uncertainty Principles, that are needed to understand whether or not the NMR linewidth and the RF excitation bandwidth have anything to do with "uncertainty". The article then culminates in re-addressing our Two NMR Problems in a more conscientious frame of mind by using a more refined formalism. The correct interpretation of these problems will be discussed in Part IV. © 2008 Wiley Periodicals, Inc. Concepts Magn Reson Part A 32A: 302-325, 2008. [source]
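
    For reference, the classical Fourier (time-bandwidth) relation at issue can be stated in its standard r.m.s. form (the exact constant depends on how the widths are defined):

        \sigma_t \, \sigma_\omega \;\ge\; \tfrac{1}{2}
        \qquad\Longleftrightarrow\qquad
        \sigma_t \, \sigma_\nu \;\ge\; \frac{1}{4\pi}

    where σ_t and σ_ν denote the r.m.s. widths of a signal and of its Fourier transform. A pulse of duration of order T therefore has spectral content spread over a band of order 1/T as a purely mathematical property of the Fourier transform, independently of any quantum-mechanical argument; whether this deserves the name "uncertainty" is exactly the question the article examines.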


    Maximizing revenue in Grid markets using an economically enhanced resource manager

    CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 14 2010
    M. Macías
    Abstract Traditional resource management has had as its main objective the optimization of throughput, based on parameters such as CPU, memory, and network bandwidth. With the appearance of Grid markets, new variables that determine economic expenditure, benefit and opportunity must be taken into account. The Self-organizing ICT Resource Management (SORMA) project aims at allowing resource owners and consumers to exploit market mechanisms to sell and buy resources across the Grid. SORMA's motivation is to achieve efficient resource utilization by maximizing revenue for resource providers and minimizing the cost of resource consumption within a market environment. An overriding factor in Grid markets is the need to ensure that the desired quality of service levels meet the expectations of market participants. This paper explains the proposed use of an economically enhanced resource manager (EERM) for resource provisioning based on economic models. In particular, this paper describes techniques used by the EERM to support revenue maximization across multiple service level agreements and provides an application scenario to demonstrate its usefulness and effectiveness. Copyright © 2008 John Wiley & Sons, Ltd. [source]


    Towards virtualized desktop environment

    CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 4 2010
    Xiaofei Liao
    Abstract Virtualization is now widely used and is an emerging trend. Rapid improvements in network bandwidth, ubiquitous security hazards and the high total cost of ownership of personal computers have created a growing market for desktop virtualization. Much like server virtualization, virtualizing desktops involves separating the physical location of a client device from its logical interface. However, the performance and usability of some traditional desktop frameworks do not satisfy end users. Other solutions, including WebOS, which requires rebuilding all daily-used applications in client/server mode, cannot be easily accepted by people in a short time. We present LVD, a system that combines virtualization technology and inexpensive personal computers (PCs) to realize a lightweight virtual desktop system. Compared with previous desktop systems, LVD builds an integrated, novel desktop environment which supports the backup, mobility, suspending and resuming of each user's working environment, supports the synchronous use of incompatible applications on different platforms, and achieves great savings in power consumption. We have implemented LVD in a cluster with Xen and compared its performance against widely used commercial approaches, including Microsoft RDP, Citrix MetaFrameXP and Sun Ray. Experimental results demonstrate that LVD is effective in performing these functions while imposing little overhead. Copyright © 2009 John Wiley & Sons, Ltd. [source]


    The Neutralizer: a self-configurable failure detector for minimizing distributed storage maintenance cost

    CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 2 2009
    Zhi Yang
    Abstract To achieve high data availability or reliability in an efficient manner, distributed storage systems must detect whether an observed node failure is permanent or transient, and if necessary, generate replicas to restore the desired level of replication. Given the unpredictability of network dynamics, however, distinguishing permanent and transient failures is extremely difficult. Though timeout-based detectors can be used to avoid mistaking transient failures for permanent ones, it is unknown how the timeout values should be selected to achieve a better tradeoff between detection latency and accuracy. In this paper, we address this fundamental tradeoff from several perspectives. First, we explore the impact of different timeout values on maintenance cost by examining the probability of their false positives and false negatives. Second, we propose a self-configurable failure detector called the Neutralizer based on the idea of counteracting false positives with false negatives. The Neutralizer could enable the system to maintain a desired replication level on average with the least amount of bandwidth. We conduct extensive simulations using real trace data from a widely deployed peer-to-peer system and synthetic traces based on PlanetLab and Microsoft PCs, showing a significant reduction in aggregate bandwidth usage after applying the Neutralizer (especially in an environment with a low average node availability). Overall, we demonstrate that the Neutralizer closely approximates the performance of a perfect 'oracle' detector in many cases. Copyright © 2008 John Wiley & Sons, Ltd. [source]
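
    The sketch below illustrates the latency/accuracy trade-off for a timeout-based detector in the simplest possible way: pick the smallest timeout whose false-positive rate (transient outages that outlast it and would trigger a spurious repair) stays under a tolerated level. The downtime distribution and the target value are assumptions for illustration; the Neutralizer's actual self-configuration rule is different.

        # Smallest timeout meeting a false-positive target: shorter timeouts detect
        # permanent failures sooner but misclassify more transient outages.
        import numpy as np

        def choose_timeout(transient_downtimes, fp_target, candidates):
            downtimes = np.asarray(transient_downtimes, dtype=float)
            for tau in sorted(candidates):
                if np.mean(downtimes > tau) <= fp_target:
                    return tau              # lowest latency meeting the accuracy target
            return max(candidates)

        rng = np.random.default_rng(1)
        transients = rng.exponential(scale=2.0, size=10_000)   # hours offline (assumed)
        print(choose_timeout(transients, fp_target=0.05, candidates=range(1, 25)))  # ~6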


    OPTICAL/DIGITAL CHROMOENDOSCOPY DURING COLONOSCOPY USING NARROW-BAND IMAGING SYSTEM

    DIGESTIVE ENDOSCOPY, Issue 2005
    Yasushi Sano
    This review concerns the narrow-band imaging (NBI) system, which has been developed at National Cancer Center Hospital East, Japan. The technology of the NBI system is based on modifying the spectral features by narrowing the bandwidth of spectral transmittance using various optical filters. The NBI system consists of three filters, 415±30 nm, 445±30 nm, and 500±30 nm, which are used because observing the fine capillaries in the superficial mucosa is essential for identifying gastrointestinal neoplasms. The NBI system has been in development since 1999, and its efficacy for gastrointestinal tract use was first reported in 2001. In our pilot study, the NBI system appeared to be sufficient to differentiate hyperplastic polyps from adenomatous polyps, and to visualize neoplasia with real-time image processing during colonoscopy without the need for dye spraying. Herein, we propose the term 'optical/digital chromoendoscopy' for imaging with the NBI system and hope that this instrument will become a standard endoscopic technique in the 21st century. To evaluate the feasibility and efficacy of using the NBI system for surveillance or screening examinations, randomized controlled trials should be conducted in the future. [source]


    A seismic retrofit method by connecting viscous dampers for microelectronics factories

    EARTHQUAKE ENGINEERING AND STRUCTURAL DYNAMICS, Issue 11 2007
    Jenn-Shin Hwang
    Abstract The implementation of viscous dampers in microelectronics factories has previously been proved not to affect the micro-vibration of the factories in operation, so that the vibration-sensitive manufacturing process is not interfered with. Therefore, a seismic retrofit strategy which employs viscous dampers installed between the exterior and interior structures of the 'fab' structure is proposed in this study. The design formulas corresponding to the proposed retrofit method are derived using the non-proportional damping theory. Based on the study, it is found that the damping ratio added to the fab structure depends greatly on the frequency ratio of the two structures, in addition to the damping coefficients of the added dampers. Outside the bandwidth of the frequency ratio in which the added damping ratio is very sensitive to the variation of the frequency ratio, the added damping ratio can be well captured using the classical damping theory. Copyright © 2007 John Wiley & Sons, Ltd. [source]


    Direct on-line analysis of neutral analytes by dual sweeping via complexation and organic solvent field enhancement in nonionic MEKC

    ELECTROPHORESIS, Issue 8 2009
    Jun Cao
    Abstract Conventionally, neutral compounds cannot be separated by nonionic micelle capillary electrophoresis. In this report, a novel on-line preconcentration technique combining dual sweeping based on complexation and organic solvent field enhancement is applied to the sensitive and selective analysis of three neutral glucosides: ginsenoside Rf, ginsenoside Rg1, and ginsenoside Re. Nonionic micelle detectability by CE is demonstrated through effective focusing of large sample volumes (up to 38% of the capillary length) using a dual sweeping mode. This results in a 50- to 130-fold improvement in the LODs relative to the conventional injection method. Sweeping of neutral compounds is examined in terms of analyte mobility dependence on borate complexation, solvent viscosity difference, and Brij-35 interaction. Enhanced focusing performance by this hyphenated method was demonstrated by a greater than fourfold reduction in glucoside bandwidth, as compared with common sweeping (i.e. without the organic-solvent-mediated sweeping step in the sample matrices). Moreover, separation efficiencies greater than a million theoretical plates can be achieved by sweeping large sample volumes into narrow zones. The designated method was also tested for its ability to determine the presence of glucosides in crude extracts obtained from a plant sample. [source]


    Numerical modeling of the Joule heating effect on electrokinetic flow focusing

    ELECTROPHORESIS, Issue 10 2006
    Kuan-Da Huang
    Abstract In electrokinetically driven microfluidic systems, the driving voltage applied during operation tends to induce a Joule heating effect in the buffer solution. This heat source alters the solution's characteristics and changes both the electrical potential field and the velocity field during the transport process. This study performs a series of numerical simulations to investigate the Joule heating effect and analyzes its influence on the electrokinetic focusing performance. The results indicate that the Joule heating effect causes the diffusion coefficient of the sample to increase, the potential distribution to change, and the flow velocity field to adopt a nonuniform profile. These variations are particularly pronounced under tighter focusing conditions and at higher applied electrical intensities. In the numerical investigations, it is found that the focused bandwidth broadens because the thermal diffusion effect is enhanced by Joule heating. The variation in the potential distribution induces a nonuniform flow field and causes the focused bandwidth to tighten and broaden alternately as a result of the convex and concave velocity flow profiles, respectively. The present results confirm that the Joule heating effect exerts a considerable influence on the electrokinetic focusing ratio. [source]


    Kernel estimates of hazard functions for carcinoma data sets

    ENVIRONMETRICS, Issue 3 2006
    Ivana Horová
    Abstract The present article focuses on kernel estimates of hazard functions and their derivatives. Our approach is based on the model introduced by Müller and Wang (1990). In order to estimate the hazard function in an effective manner, the automatic procedure of Horová et al. (2002) is applied. The procedure chooses a bandwidth, a kernel and the order of the kernel. As a by-product, we propose a special procedure for the estimation of the optimal bandwidth. This is applied to carcinoma data sets kindly provided by the Masaryk Memorial Cancer Institute in Brno. Attention is also paid to the points of most rapid change of the hazard function. Copyright © 2006 John Wiley & Sons, Ltd. [source]
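
    For orientation, the sketch below is a generic kernel hazard estimator: it smooths Nelson-Aalen increments with a fixed Epanechnikov kernel and a hand-picked bandwidth. It omits the boundary corrections of the Müller-Wang approach and the automatic bandwidth/kernel/order selection used in the paper; the data are simulated.

        # Kernel-smoothed Nelson-Aalen hazard estimate with a fixed bandwidth.
        import numpy as np

        def kernel_hazard(times, events, grid, bandwidth):
            """times: follow-up times; events: 1 = event, 0 = censored."""
            order = np.argsort(times)
            times = np.asarray(times, float)[order]
            events = np.asarray(events, int)[order]
            n = len(times)
            at_risk = n - np.arange(n)                   # Y(t_i) for sorted times
            increments = events / at_risk                # Nelson-Aalen jumps
            u = (np.asarray(grid)[:, None] - times[None, :]) / bandwidth
            k = np.where(np.abs(u) <= 1, 0.75 * (1 - u ** 2), 0.0)   # Epanechnikov
            return (k * increments[None, :]).sum(axis=1) / bandwidth

        rng = np.random.default_rng(2)
        t = rng.exponential(5.0, size=300)               # true hazard = 0.2 everywhere
        print(np.round(kernel_hazard(t, np.ones(300), np.linspace(2, 8, 4), 2.0), 2))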


    Directional responses of visual wulst neurones to grating and plaid patterns in the awake owl

    EUROPEAN JOURNAL OF NEUROSCIENCE, Issue 7 2007
    Jerome Baron
    Abstract The avian retinothalamofugal pathway reaches the telencephalon in an area known as visual wulst. A close functional analogy between this area and the early visual cortex of mammals has been established in owls. The goal of the present study was to assess quantitatively the directional selectivity and motion integration capability of visual wulst neurones, aspects that have not been previously investigated. We recorded extracellularly from a total of 101 cells in awake burrowing owls. From this sample, 88% of the units exhibited modulated directional responses to sinusoidal gratings, with a mean direction index of 0.74 ± 0.03 and tuning bandwidth of 28 ± 1.16°. A direction index higher than 0.5 was observed in 66% of the cells, thereby qualifying them as direction selective. Motion integration was tested with moving plaids, made by adding two sinusoidal gratings of different orientations. We found that 80% of direction-selective cells responded optimally to the motion direction of the component gratings, whereas none responded to the global motion of plaids, whose direction was intermediate to that of the gratings. The remaining 20% were unclassifiable. The strength of component motion selectivity rapidly increased over a 200 ms period following stimulus onset, maintaining a relatively sustained profile thereafter. Overall, our data suggest that, as in the mammalian primary visual cortex, the visual wulst neurones of owls signal the local orientated features of a moving object. How and where these potentially ambiguous signals are integrated in the owl brain might be important for understanding the mechanisms underlying global motion perception. [source]
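
    As context for the numbers quoted above, a commonly used definition of the direction index (the paper's exact formula may differ) compares the response in the preferred direction, R_pref, with that in the opposite direction, R_opp:

        DI \;=\; 1 - \frac{R_{\mathrm{opp}}}{R_{\mathrm{pref}}}

    With hypothetical rates R_pref = 20 spikes/s and R_opp = 5 spikes/s, DI = 1 - 5/20 = 0.75; a DI above 0.5 then corresponds to the opposite-direction response being less than half of the preferred-direction response.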


    Understanding the partial discharge activity of conducting particles in GIS under DC voltages using the UHF technique

    EUROPEAN TRANSACTIONS ON ELECTRICAL POWER, Issue 5 2010
    R. Sarathi
    Abstract The major cause of failure of DC-GIS is the presence of foreign particles causing partial discharges in the insulation structure. Particle movement in a gas insulated system (GIS) radiates electromagnetic waves, and the bandwidth of the signal lies in the range 1-2 GHz. Increasing the applied DC voltage/pressure did not alter the frequency content of the ultra high frequency (UHF) signal generated by the partial discharges formed by particle movement. The UHF sensor could recognize the breakdown of sulfur hexafluoride (SF6) gas under DC and lightning impulse voltages, and the frequency content of the signal captured by the UHF sensor extends up to 500 MHz. Mounting a UHF sensor in GIS could allow one to distinguish internal partial discharges from breakdown at the time of testing or during operation. Copyright © 2009 John Wiley & Sons, Ltd. [source]


    A new Pst-weighting filter for the flickermeter in the frequency domain

    EUROPEAN TRANSACTIONS ON ELECTRICAL POWER, Issue 1 2001
    H. Amarís
    This paper presents a new filter for the Flickermeter in the frequency domain that allows a fast estimation of the Pst short-time flicker parameter. The power spectrum of the signal is weighted with the new Pst-weighting filter and the Pst parameter may be deduced directly. The Flickermeter structure has been tested against the IEC-Flickermeter recommendations and it exhibits good behaviour over the whole bandwidth of 0 Hz to 35 Hz, even at very low frequencies. The filter model has been deduced using identification tools in the frequency domain. The advantages of the Flickermeter methodology in the frequency domain are studied. [source]


    Experiments on space diversity effect in MIMO channel transmission with maximum data rate of 1 Gbps in downlink OFDM radio access

    EUROPEAN TRANSACTIONS ON TELECOMMUNICATIONS, Issue 6 2006
    Hidekazu Taoka
    This paper presents experimental results on the space diversity effect in MIMO multiplexing/diversity with target data rates of up to 1 Gbps using OFDM radio access, based on laboratory and field experiments including realistic impairments, using the implemented MIMO transceivers with a maximum of four transmitter/receiver branches. The experimental results using multipath fading simulators show that at a frequency efficiency of less than approximately 2 bits/second/Hz, MIMO diversity using the space-time block code (STBC) increases the measured throughput compared to MIMO multiplexing owing to the high transmission space diversity effect. At frequency efficiencies higher than approximately 2-3 bits/second/Hz, however, MIMO multiplexing exhibits performance superior to that of MIMO diversity, since the impairments of using higher-order data modulation and a higher channel coding rate in MIMO diversity overcome the space diversity effect. The results also show that the receiver space diversity effect is very effective in MIMO multiplexing for maximum likelihood detection employing QR-decomposition and the M-algorithm (QRM-MLD) signal detection. Finally, we show that real-time throughputs of 500 Mbps and 1 Gbps in a 100-MHz transmission bandwidth are achieved at average received Eb/N0 per receiver antenna of approximately 8.0 and 14.0 dB, using 16QAM modulation and Turbo coding with coding rates of 1/2 and 8/9 respectively, in 4-by-4 MIMO multiplexing in a real propagation environment. Copyright © 2006 AEIT. [source]
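
    As a rough consistency check on the quoted figures (the abstract does not break down the overheads, so the raw figure below is only indicative):

        \eta_{\text{net}} = \frac{1\ \text{Gbps}}{100\ \text{MHz}} = 10\ \text{bit/s/Hz},
        \qquad
        \eta_{\text{raw}} = 4 \times \log_2 16 \times \tfrac{8}{9} \approx 14.2\ \text{bit/s/Hz}

    i.e. 4 spatial streams of 16QAM with rate-8/9 coding carry roughly 14.2 bit/s/Hz before OFDM overheads (cyclic prefix, pilot and control symbols, assumed here), which is compatible with the delivered 10 bit/s/Hz; likewise 500 Mbps corresponds to 5 bit/s/Hz delivered versus 8 bit/s/Hz raw with rate-1/2 coding.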


    An adaptive min-max fair bandwidth allocation scheme for cellular multimedia networks

    EUROPEAN TRANSACTIONS ON TELECOMMUNICATIONS, Issue 5 2006
    Mohammad Mahfuzul Islam
    Depending on the flexibility in controlling the transmission rate and the differences between on-line (real-time) and off-line transmission modes, multimedia applications can potentially include a wide range of services, ranging from traditional services with stringent quality-of-service (QoS) requirements at one extreme to highly adaptive ones that can tolerate or smartly adapt to transient fluctuations in the QoS parameters. Keeping cellular multimedia networks efficient, with low call dropping and blocking rates and high bandwidth utilisation, while maintaining a fair distribution of bandwidth by synergistically addressing the differences among these services, remains a significant challenge. This paper addresses this issue by developing a novel min-max fairness scheme in which bandwidth is distributed in equal shares only after ensuring the minimum requirements. Besides borrowing in-use bandwidth through redistribution, this scheme also allows the reserved bandwidth to be used for off-line services. Simulation results confirm the superiority of this scheme over the rate-based borrowing and max-min fairness schemes, the two most recent works addressing similar issues. Copyright © 2005 AEIT. [source]
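
    The core allocation idea ("equal shares only after ensuring the minimum requirements") can be sketched as a small progressive-filling loop. The call names, minima and maxima below are hypothetical, and the paper's borrowing and reservation mechanisms are not modelled.

        # Guarantee each call its minimum, then share the remaining capacity
        # equally, never exceeding a call's requested maximum.
        def min_max_allocate(capacity, calls):
            """calls: {name: (min_bw, max_bw)} -> {name: allocation}, or None
            if even the minima do not fit."""
            alloc = {name: mn for name, (mn, mx) in calls.items()}
            spare = capacity - sum(alloc.values())
            if spare < 0:
                return None
            unsat = {name for name, (mn, mx) in calls.items() if mn < mx}
            while spare > 1e-9 and unsat:
                share = spare / len(unsat)
                for name in list(unsat):
                    grant = min(share, calls[name][1] - alloc[name])
                    alloc[name] += grant
                    spare -= grant
                    if calls[name][1] - alloc[name] <= 1e-9:
                        unsat.remove(name)
            return alloc

        print(min_max_allocate(10.0, {"voice": (1, 2), "video": (3, 8), "data": (0, 8)}))
        # -> {'voice': 2, 'video': 5.5, 'data': 2.5}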


    Performance analysis of TH-UWB radio systems using proper waveform design in the presence of narrow-band interference

    EUROPEAN TRANSACTIONS ON TELECOMMUNICATIONS, Issue 1 2006
    Hassan Khani
    Ultra-wide band (UWB) radio systems, because of their huge bandwidth, must coexist with many narrow-band systems in their frequency band. This coexistence may cause significant degradation in the performance of both kinds of systems. Currently, several methods exist for narrow-band interference (NBI) suppression in UWB radio systems. One of them is based on mitigating the effects of NBI through proper waveform design. In Reference 1, it has been shown that using a properly designed doublet waveform can significantly reduce the effects of NBI on an important kind of UWB radio system, i.e. BPSK time-hopping UWB (TH-UWB) systems. In this paper, the proper waveform design technique is extended to BPPM TH-UWB systems. It is shown that this method can properly suppress the effects of NBI on the performance of BPPM TH-UWB systems. Copyright © 2005 AEIT. [source]


    TCP-friendly transmission of voice over IP

    EUROPEAN TRANSACTIONS ON TELECOMMUNICATIONS, Issue 3 2003
    F. Beritelli
    In the last few years an increasing amount of attention has been paid to technologies for the transmission of voice over IP (VoIP). At present, the UDP transport protocol is used to provide this service. However, when the same bottleneck link is shared with TCP flows, and in the presence of high network load and congestion, UDP sources capture most of the bandwidth, strongly penalizing TCP sources. To solve this problem, some congestion control should be introduced for UDP traffic as well, so that this traffic becomes TCP-friendly. From this perspective, several TCP-friendly algorithms have been proposed in the literature. Among them, the most promising candidates for the immediate future are RAP and TFRC. However, although these algorithms were introduced to support real-time applications on the Internet, up to now the only target in optimizing them has been that of achieving fairness with TCP flows in the network. No attention has been paid to the applications using them and, in particular, to the quality of service (QoS) perceived by their users. The target of this paper is to analyze the problem of transmitting voice over IP when voice sources use one of these TCP-friendly algorithms. With this aim, a VoIP system architecture is introduced and the characteristics of each of its elements are discussed. To optimize the system, a multirate voice encoder is used so that it is feasible to work over a TCP-friendly layer, and a modification of both RAP and TFRC is proposed. Finally, in order to analyze the performance of the proposed system architecture and to compare the modified RAP and TFRC with the original algorithms, the sources have been modeled with an arrival process modulated by a Markov chain, and the model has been used to generate traffic in a simulation study performed with the ns-2 network simulator. Copyright © 2003 AEI. [source]
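
    For context, TFRC sets its sending rate from the widely used TCP throughput equation (the form given in RFC 3448); the sketch below evaluates it for an illustrative voice-packet scenario. The packet size, RTT and loss rate are assumed numbers, and the paper's modified RAP/TFRC variants are not reproduced.

        # TCP throughput equation used by TFRC as its target rate (RFC 3448):
        # s = packet size (bytes), rtt = round-trip time (s), p = loss event rate,
        # b = packets per ACK, t_rto ~ 4 * rtt.
        from math import sqrt

        def tfrc_rate(s, rtt, p, b=1, t_rto=None):
            if t_rto is None:
                t_rto = 4 * rtt
            denom = (rtt * sqrt(2 * b * p / 3)
                     + t_rto * (3 * sqrt(3 * b * p / 8)) * p * (1 + 32 * p ** 2))
            return s / denom                        # bytes per second

        # e.g. 200-byte voice packets, 100 ms RTT, 1% loss -> roughly 22,000 bytes/s
        print(round(tfrc_rate(200, 0.1, 0.01)))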


    Phototunable Azobenzene Cholesteric Liquid Crystals with 2000 nm Range

    ADVANCED FUNCTIONAL MATERIALS, Issue 21 2009
    Timothy J. White
    Abstract Phototuning of more than 2000 nm is demonstrated in an azobenzene-based cholesteric liquid crystal (azo-CLC) consisting of a high-helical-twisting-power, axially chiral bis(azo) molecule (QL76). Phototuning range and rate are compared as a function of chiral dopant concentration, light intensity, and thickness. CLCs composed of QL76 maintain the CLC phase regardless of intensity or duration of exposure. The time necessary for the complete restoration of the original spectral properties (position, bandwidth, baseline transmission, and reflectivity) of QL76-based CLC is dramatically reduced from days to a few minutes by polymer stabilization of the CLC helix. [source]


    Seismic wave properties in time-dependent porosity homogeneous media

    GEOPHYSICAL JOURNAL INTERNATIONAL, Issue 3 2007
    G. Quiroga-Goode
    SUMMARY The properties of seismic waves in fully saturated homogeneous porous media are quantified within the framework of Sahay's modified and reformulated poroelastic theory. The computational results comprise amplitude attenuation, velocity dispersion and seismic waveforms. They show that the behaviour of all four modelled waves, as a function of offset, frequency, porosity, fluid viscosity and source bandwidth, depicts realistic dissipation within the sonic-ultrasonic band. Therefore, it appears that there is no need to include material heterogeneity to model attenuation. By inference it is concluded that the fluid viscosity effects may be enhanced by dynamic porosity. [source]


    Reverse modelling for seismic event characterization

    GEOPHYSICAL JOURNAL INTERNATIONAL, Issue 1 2005
    Dirk Gajewski
    SUMMARY The localization of seismic events is of utmost importance in seismology and exploration. Current techniques rely on the fact that the recorded event is detectable at most of the stations of a seismic network. Weak events, not visible in the individual seismograms of the network, are missed. We present an approach in which no picking of events in the seismograms of the recording network is required. The observed wavefield of the network is reversed in time and then considered as the boundary value for the reverse modelling. Assuming the correct velocity model, the reversely modelled wavefield focuses on the hypocentre of the seismic event. The origin time of the event is given by the time at which maximum focusing is observed. The spatial extent of the focus reflects the resolution power of the recorded wavefield and the acquisition. This automatically provides the uncertainty in the localization with respect to the bandwidth of the recorded data. The method is particularly useful for the upcoming large passive networks since no picking is required. It has great potential for localizing very weak events, not detectable in the individual seismogram, since the reverse modelling sums the energy of all recorded traces and therefore enhances the signal-to-noise ratio, similarly to stacking in seismic exploration. The method is demonstrated by 2-D and 3-D numerical case studies, which show the potential of the technique. Events with an S/N ratio smaller than 1, which cannot be identified in the individual seismograms of the network, are localized very well by the method. [source]


    Sea surface shape derivation above the seismic streamer

    GEOPHYSICAL PROSPECTING, Issue 6 2006
    Robert Laws
    ABSTRACT The rough sea surface causes perturbations in the seismic data that can be significant for time-lapse studies. The perturbations arise because the reflection response of the non-flat sea perturbs the seismic wavelet. In order to remove these perturbations from the received seismic data, special deconvolution methods can be used, but these methods require, as input, the time-varying wave elevation above each hydrophone in the streamer. In addition, the vertical displacement of the streamer itself must be known at the position of each hydrophone and at all times. This information is not available in conventional seismic acquisition. However, it can be obtained from the hydrophone measurements provided that the hydrophones are recorded individually (not grouped), that the recording bandwidth is extended down to 0.05 Hz and that data are recorded without gaps between the shot records. The sea surface elevation, and also the wave-induced vertical displacement of the streamer, can be determined from the time-varying pressure that the sea waves cause in the hydrophone measurements. When this was done experimentally, using a single-sensor seismic streamer without a conventional low-cut filter, the wave-induced pressure variations were easily detected. The inversion of these experimental data gives results for the sea surface elevation that are consistent with the weather and sea state at the time of acquisition. A high-tension approximation allows a simplified solution of the equations that does not demand a knowledge of the streamer tension. However, best results at the tail end of the streamer are obtained using the general equation. [source]
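
    The physical link exploited here is, in its simplest (linear, deep-water) form, the wave-induced dynamic pressure felt at a towing depth z below the surface:

        p_{\text{dyn}}(t) \;\approx\; \rho\, g\, \eta(t)\, e^{-kz}
        \qquad\Longrightarrow\qquad
        \eta(t) \;\approx\; \frac{p_{\text{dyn}}(t)}{\rho g}\, e^{kz}

    where η is the sea-surface elevation, k the wavenumber of the swell, ρ the water density and g gravity; for swell that is long compared with the streamer depth (kz << 1), η ≈ p_dyn/(ρg). This is only the textbook Airy-wave relation behind the idea; the paper's inversion additionally handles the streamer's own wave-induced motion and tension.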


    Dispersion and radial depth of investigation of borehole modes

    GEOPHYSICAL PROSPECTING, Issue 4 2004
    Bikash K. Sinha
    ABSTRACT Sonic techniques in geophysical prospecting involve elastic wave velocity measurements that are performed by placing acoustic transmitters and receivers in a fluid-filled borehole. The signals recorded at the receivers are processed to obtain compressional- and shear-wave velocities in the surrounding formation. These velocities are generally used in seismic surveys for the time-to-depth conversion and other formation parameters, such as porosity and lithology. Depending upon the type of transmitter used (e.g. monopole or dipole) and as a result of eccentering, it is possible to excite axisymmetric (n = 0), flexural (n = 1) and quadrupole (n = 2) families of modes propagating along the borehole. We present a study of various propagating and leaky modes that includes their dispersion and attenuation characteristics caused by radiation into the surrounding formation. A knowledge of propagation characteristics of borehole modes helps in a proper selection of transmitter bandwidth for suppressing unwanted modes that create problems in the inversion for the compressional- and shear-wave velocities from the dispersive arrivals. It also helps in the design of a transmitter for a preferential excitation of a given mode in order to reduce interference with drill-collar or drilling noise for sonic measurements-while-drilling. Computational results for the axisymmetric family of modes in a fast formation with a shear-wave velocity of 2032 m/s show the existence of Stoneley, pseudo-Rayleigh and anharmonic cut-off modes. In a slow formation with a shear-wave velocity of 508 m/s, we find the existence of the Stoneley mode and the first leaky compressional mode which cuts in at approximately the same normalized frequency ωa/VS = 2.5 (a is the borehole radius) as that of the fast formation. The corresponding modes among the flexural family include the lowest-order flexural and anharmonic cut-off modes. For both the fast and slow formations, the first anharmonic mode cuts in at a normalized frequency ωa/VS = 1.5 approximately. Cut-off frequencies of anharmonic modes are inversely proportional to the borehole radius in the absence of any tool. The borehole quadrupole mode can also be used for estimating formation shear slownesses. The radial depth of investigation with a quadrupole mode is marginally less than that of a flexural mode because of its higher frequency of excitation. [source]
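
    To put the quoted normalized cut-off in physical units: with ωa/VS ≈ 1.5, the cut-off frequency of the first anharmonic mode is f_c = 1.5 VS/(2πa). Taking the fast-formation shear velocity VS = 2032 m/s from the abstract and an assumed borehole radius a = 0.1 m (not given in the abstract):

        f_c \;=\; \frac{1.5\, V_S}{2\pi a} \;=\; \frac{1.5 \times 2032\ \text{m/s}}{2\pi \times 0.1\ \text{m}} \;\approx\; 4.9\ \text{kHz}

    which is consistent with the statement that these cut-off frequencies scale inversely with borehole radius.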


    Experimental validation of the wavefield transform of electromagnetic fields

    GEOPHYSICAL PROSPECTING, Issue 5 2002
    Kaushik Das
    The wavefield transform is a mathematical technique for transforming low-frequency electromagnetic (EM) signals to a non-diffusive wave domain. The ray approximation is valid in the transform space and this makes traveltime tomography for 3D mapping of the electrical conductivity distribution in the subsurface possible. The transform, however, imposes stringent frequency bandwidth and signal-to-noise ratio requirements on the data. Here we discuss a laboratory scale experiment designed to collect transform quality EM data, and to demonstrate the practical feasibility of transforming these data to the wavefield domain. We have used the scalable nature of EM fields to design a time-domain experiment using graphite blocks to simulate realistic field conditions while leaving the time scale undisturbed. The spatial dimensions have been scaled down by a factor of a thousand by scaling conductivity up by a factor of a million. The graphite blocks have two holes drilled into them to carry out cross-well and borehole-to-surface experiments. Steel sheets have been inserted between the blocks to simulate a conductive layer. Our experiments show that accurate EM data can be recorded on a laboratory scale model even when the scaling of some features, such as drill-hole diameters, is not maintained. More importantly, the time-domain EM data recorded in cross-well and surface-to-borehole modes can be usefully and accurately transformed to the wavefield domain. The observed wavefield propagation delay is proportional to the direct distance between the transmitter and receiver in a homogeneous medium. In a layered medium, data accuracy is reduced and, hence, our results are not so conclusive. On the basis of the experimental results we conclude that the wavefield transform could constitute a valid approach to the interpretation of accurate, undistorted time-domain data if further improvement in the transform can be realized. [source]