That isn't how they work at all. The smoke does not "block" the radiation in that way. And even if it did, a decay in the rate of emission would make the detector more sensitive, not less.
The theory is that uncontaminated air is relatively hard to ionize, while air with smoke in it is easily ionized, for example by an alpha particle emitted by a small sample of americium-241. The heart of the detector is therefore an ionization cell containing two electrodes with a voltage bias across them, with the americium located inside this cell. If there is smoke in the air, the radiation ionizes it, and a measurable current flows between the two electrodes in the form of pulses created by the individual ionization events.
The average current is proportional to the rate at which the americium emits particles, so it does go down as the americium decays. But as Brian Gordon says, the half-life is very long (432 years), so the decline has no real effect over the useful lifetime of the detector as a whole.
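To put a rough number on that (the 10-year figure is just a typical replacement interval I'm assuming, not something from the question):

```python
# Back-of-the-envelope check: fraction of the original Am-241 activity left
# after t years, given the 432-year half-life mentioned above.
half_life = 432.0   # years (Am-241)
t = 10.0            # assumed useful lifetime of the detector, in years
remaining = 0.5 ** (t / half_life)
print(f"Activity remaining after {t:.0f} years: {remaining:.4f}")  # ~0.984
```

So the average current drops by well under 2% before the detector is due to be replaced anyway.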
However, the current pulses caused by individual ionization events always have the same amplitude, so the actual "sensitivity" of the detector to a given concentration of smoke is affected to an even lesser degree, depending on exactly how the detector electronics "integrate" the events to decide whether to trigger the alarm.
For example, it could be programmed to look for a surge that is, say, 3× the "background" rate, in which case the sensitivity would not be affected at all, even as the background rate decays with time.
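Here's a minimal sketch of that kind of relative-threshold logic (the function name, window size, and 3× ratio are all illustrative assumptions, not taken from any real detector's firmware):

```python
from collections import deque

def smoke_alarm(pulse_counts, window=10, ratio=3.0):
    """Toy version of the 'surge vs. background' idea described above.

    pulse_counts : iterable of ionization-pulse counts per sampling interval.
    Triggers when the latest count exceeds `ratio` times the average of the
    previous `window` intervals (the running "background" rate).
    """
    background = deque(maxlen=window)
    for count in pulse_counts:
        if len(background) == window:
            baseline = sum(background) / window
            if count > ratio * baseline:
                return True   # pulse rate surged well above background: alarm
        background.append(count)
    return False

# Steady background of ~4 pulses per interval, then a surge to 20.
print(smoke_alarm([4, 5, 4, 4, 3, 5, 4, 4, 5, 4, 20]))  # True
```

Because the trigger level is defined relative to the measured background rather than as an absolute pulse count, it tracks the source as it slowly decays, which is why the sensitivity stays essentially constant.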