I want to know how to calculate the channel capacity without using the mathematical formula. By definition it is the maximum number of bits per second that you can send over the channel with a very low probability of error. So I was wondering: if I run a very large number of MATLAB simulations of transmitting symbols (BPSK modulated, for example) from a transmitter to a receiver using a fixed bandwidth and a certain signal-to-interference ratio (SIR, by which I mean the power of the signal over the power of the noise), how can I measure the capacity of my channel?

My idea is that I can calculate the bit error rate (the number of errored bits over the total number of bits). I will fix a maximum acceptable probability of error (say BER = 10^-6). Then I will count the number of correctly received bits while staying below my acceptable probability of error, to find my channel capacity for that simulation. After many simulations I will take the mean of all my calculated capacities, which will represent my channel capacity for a certain modulation, bandwidth and SIR.
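Here is a minimal MATLAB sketch of the kind of simulation I have in mind; all parameter values are only illustrative:

```matlab
% Sketch of the proposed experiment: estimate the BER of uncoded BPSK
% over an AWGN channel at one SNR point (all values are illustrative).
nBits = 1e6;                         % number of simulated bits
snrdB = 6;                           % Es/N0 in dB
snr   = 10^(snrdB/10);               % linear SNR

bits  = randi([0 1], nBits, 1);      % random source bits
x     = 2*bits - 1;                  % BPSK mapping: 0 -> -1, 1 -> +1
n     = sqrt(1/(2*snr)) * randn(nBits, 1);  % real AWGN with N0/2 = 1/(2*snr)
y     = x + n;                       % received samples

bitsHat = double(y > 0);             % hard-decision detection
ber     = mean(bitsHat ~= bits);     % empirical bit error rate
fprintf('Estimated BER at %g dB: %g\n', snrdB, ber);
```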
1 Answer
Your proposal will not give you the channel capacity as defined by the Shannon-Hartley formula.
The reason is that the Shannon-Hartley formula tells us that, for a channel with a fixed SNR, there exists some coding scheme that achieves error-free transmission at any rate up to the channel capacity. Put another way, the formula tells us the error-free transmission rate we can achieve if we choose the best of all possible coding schemes.
Since your proposal tests the error rate with only one particular coding scheme, it is not likely to achieve the optimum transmission rate.
Actually finding the best possible scheme in all situations is not a solved problem, and it has been a major focus of communications research since 1948. That means we don't yet know what all the possible coding schemes are, so there's no practical way to run a Monte Carlo or other simulation over them to estimate the channel capacity.
In any case, why should we avoid using the formula? It's quite simple to calculate:
$$C = B\log_2(1+\mathrm{SNR})$$
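For example, with illustrative numbers of my own (not from the question), a 1 MHz channel at 20 dB SNR gives about 6.66 Mbit/s:

```matlab
% Shannon-Hartley capacity for an example channel (illustrative values)
B     = 1e6;                            % bandwidth in Hz
snrdB = 20;                             % SNR in dB (power ratio of 100)
C     = B * log2(1 + 10^(snrdB/10));    % capacity in bits per second
fprintf('C = %.3g Mbit/s\n', C/1e6);    % prints about 6.66
```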
- Thank you for the explanations. I was thinking that this formula applies only to an AWGN channel. In my case the noise is additive but non-Gaussian; it is impulsive noise. Do you think I can use the formula even if my noise is non-Gaussian? – ismail bsa Dec 10 '14 at 20:20
- You're right that that formula is based on AWGN. For other noise, if you can calculate the probability of a symbol error, you can use the noisy channel coding theorem (see the sketch after this thread). Unfortunately the wiki page on this is not that great. – The Photon Dec 10 '14 at 21:46
- The wiki page on channel capacity is a bit better; at least it lays out the connection between the noisy channel coding theorem and the Shannon-Hartley formula. – The Photon Dec 10 '14 at 21:48
- Is there any other way to compute the capacity (under certain fixed conditions) of a single realization of the channel by just comparing received and sent data? I know that those mathematical formulas give the upper bound on the capacity you can achieve with a given coding and decoding technique. But if I propose a certain coding method, how do I compare the capacity I can achieve with it to the maximal achievable capacity? – ismail bsa Dec 12 '14 at 20:32
- If you are measuring the minimum input power at the receiver needed to achieve a certain bit error rate, that's called the sensitivity of the receiver. If you are measuring how much impairment can be added to the channel before a certain bit error rate is reached, that's called the margin in the link budget. Both are useful things to measure, but neither one is the same as the channel capacity. – The Photon Dec 12 '14 at 21:36
- If you have a certain link configuration and you measure the BER, then you can work out a block code to achieve effectively error-free communications. You can compare its efficiency relative to the ideal channel capacity by simply dividing your achieved information transfer rate by the rate given by the noisy channel coding theorem (a sketch follows below). – The Photon Dec 12 '14 at 21:39