
I’ve been reading that interrupt callbacks must be very short. I have two questions:

  1. Imagine the while(1) loop inside the main. I guess this while(1) loops with clock frequency(?). Does this looping stop when the interrupt occurs and resume when the interrupt callback function finishes execution?

  2. What determines the proper max duration of an interrupt callback function?

GNZ

5 Answers


I guess this while(1) loops with clock frequency(?).

If you mean that you get one loop iteration per clock cycle: you have to check the assembly generated by the compiler to know.

Does this looping stop when the interrupt occurs and resume when the interrupt callback function finishes execution?

This is exactly the purpose of interrupt service routines. The processor interrupts the main code, saves the context, executes the ISR to handle the interrupt, restores the context and resumes the main code.

What determines the proper max duration of an interrupt callback function?

It depends on how this interrupt is executed: can the ISR itself be interrupted by another, higher-priority interrupt? Does the main code have soft real-time requirements that must be met? If it does, it can't be paused for too long.
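A common way to keep an ISR within such a bound is the "set a flag, defer the work" pattern. A minimal, host-runnable sketch (all names here are hypothetical; on real hardware `timer_isr` would be the actual handler registered for the interrupt):

```c
#include <signal.h>   /* sig_atomic_t: safe type for a single ISR flag */

/* Flag shared between the ISR and the main loop. */
static volatile sig_atomic_t data_ready = 0;

/* ISR body: do the bare minimum and return. */
void timer_isr(void)
{
    data_ready = 1;              /* just record that work is pending */
}

/* Called from the main loop, with interrupts enabled, so the slow
 * part of the job never blocks other interrupts or the main code. */
int process_pending(void)
{
    if (!data_ready)
        return 0;                /* nothing to do */
    data_ready = 0;
    /* ...the actual (possibly slow) processing would go here... */
    return 1;
}
```

The main loop then calls `process_pending()` on every pass; the ISR itself stays a handful of instructions long.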

devnull

I’ve been reading that interrupt callbacks must be very short.

They don't have to be short. It's that when you write a program, you will typically want them to be short, unless you want to risk things like having to handle the case that you need to serve the next interrupt while you're still serving an interrupt.

It's an architectural rule of thumb that interrupt service routines shall be short, not a technical necessity. But it's a very good rule of thumb: stick to it, and try to understand why your sources say you shouldn't have long interrupts.

I have two questions:

  1. Imagine the while(1) loop inside the main.

I can, because I see them all the time …

I guess this while(1) loops with clock frequency(?).

Are you saying it's an empty loop, like while(1) {} or while(1);?

You shouldn't do that. The right thing to do here is to tell the processor to … not do anything until an interrupt occurs. The reason is simple: if your compiler actually implements your while(1); as (pseudo-machine code) jump 0; to just jump to the same jump instruction, that will eat a lot of power.

If it does that, the jump will probably take more than one cycle anyway; how many depends on your microarchitecture. Either way, it's not a relevant detail for a C programmer!
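To illustrate the "tell the processor to do nothing until an interrupt occurs" approach: on ARM cores that's the `wfi` instruction (other architectures have their own equivalent). This sketch guards the instruction so it also compiles and runs on a host; the names are assumptions, not any vendor's API:

```c
/* Idle loop that sleeps instead of spinning. */
static inline void wait_for_interrupt(void)
{
#if defined(__ARM_ARCH)
    __asm volatile ("wfi");      /* sleep until any enabled interrupt fires */
#else
    /* host build: nothing to sleep on, just fall through */
#endif
}

static volatile int work_pending = 0;   /* set by an ISR on real hardware */

int idle_until_work(void)
{
    while (!work_pending)
        wait_for_interrupt();    /* CPU sleeps here instead of burning power */
    work_pending = 0;
    return 1;
}
```

On the target, the core wakes from `wfi` when any enabled interrupt fires, runs the ISR, and falls back to sleep on the next loop pass if there's nothing to do.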

Does this looping stop when the interrupt occurs and resume when the interrupt callback function finishes execution?

Yep, that's what an interrupt does!

  2. What determines the proper max duration of an interrupt callback function?

What your application needs; no other statement can be given. For example, you might have something that gets data from an ADC in a timer interrupt service routine every 100 µs. Now, you also need to react to an external emergency signal at one GPIO pin in less than 50 µs, and you don't want the emergency interrupt to interrupt the timer interrupt (you can have that!), so you decide to limit your timer interrupt handler to < 50 µs to be sure things happen in time.

Notice how I said you can have interrupts interrupting interrupt handling? Nested interrupts with priorities have been a standard feature of microcontrollers for around 30 years; some teaching material still claims things like "you must make sure your interrupt handling is short, because you will miss all other interrupts that happen in the meantime" as if it were the 1970s. So, really understand your material here, and don't believe every copied-over old slide set.
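For concreteness: on a Cortex-M core with CMSIS headers, enabling such nesting is just a matter of assigning distinct priorities. `TIM2_IRQn` and `EXTI0_IRQn` below are STM32-flavoured names used purely as an assumption; substitute your own device's IRQ names:

```c
/* CMSIS-style configuration fragment (assumes the device header is
 * included).  On Cortex-M, a LOWER numeric value means HIGHER
 * priority, so the emergency pin can preempt the timer handler. */
NVIC_SetPriority(TIM2_IRQn,  2);   /* periodic ADC sampling: lower priority */
NVIC_SetPriority(EXTI0_IRQn, 1);   /* emergency GPIO: higher priority       */
NVIC_EnableIRQ(TIM2_IRQn);
NVIC_EnableIRQ(EXTI0_IRQn);
```

With this, the emergency ISR can run even while the timer ISR is in progress, which relaxes (but doesn't remove) the timing budget of the timer handler.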

Marcus Müller

Imagine you are cooking something for dinner. The recipe says the mixture needs to be stirred until it reaches some level (for now, let's assume this step is very important, i.e. if you miss that level, the taste and the texture will be partially lost).

Now imagine someone knocking on the door. It's your neighbour, asking something important. To open the door you need to stop stirring. If you can keep it as short as possible there's still a chance to save the dinner.

Now imagine your phone rings while you're talking to your neighbour at the doorstep. You need to answer the phone because the call is from your mum, but you're still talking to your neighbour and you need to help them. So you need to keep the talk with your neighbour as short as possible to answer the phone, and once you've answered the phone, you need to keep that call as short as possible too, to, again, save the dinner.


Here, your main while() loop is the stirring, and the door and the phone are, basically, interrupts. You stop stirring to deal with these interrupts, but you can continue after dealing with them.

Depending on what you are doing in the main while() loop, you may need to keep the interrupts as short as possible, or alternatively you can move that work into the interrupts themselves (in the example above the main loop carries the important task, but this is not always the case). Also, depending on your software and hardware, you can nest and/or prioritise them: e.g. you can ask your neighbour to wait while you talk to your mum on the phone (nested interrupts), or, if your phone and door ring at the same time, you may want to answer the phone first (prioritisation).

Or if the hardware (MCU or MPU, whatever it is) is capable (e.g. you have a wireless earphone on your ear and a video doorbell of which the screen, speaker and mic are all accessible in the kitchen) you can talk to your mum and your neighbour at the same time while stirring (Multitasking?).

Rohat Kılıç

Does this looping stop when the interrupt occurs and resume when the interrupt callback function finishes execution?

Yep, that's what an interrupt does!

This is exactly the purpose of interrupt service routines.

Since I see several answers saying the above, I feel it might be interesting to explore the Devil's Advocate position.

Now, this point probably involves a bit more implementation detail than is desired (or even understandable?) here, and the fact that an architecture hasn't been defined suggests I should keep it general, which helps even less in concreteness.

If that is the case, then don't mind this answer; and, in any case, do do the usual [interrupt returns quickly], especially if anyone else ever needs to read your code!

So, that said:


The above is merely the most common use case. To be clear, it's a very common use case; common enough that it's also quite a strong assumption within most (all? ...all that are worth using?) compilers / toolchains.

But it doesn't need to be the case.

In typical modern-ish architectures, what happens during an interrupt is: the CPU pushes the current address onto the stack (so it can be returned to later)*, jumps to some address to handle the interrupt (or fetches a corresponding address from memory and jumps to the specified location), and simply continues executing from there as if nothing else has changed. It's entirely up to the ISR (interrupt service routine) to handle whatever values the CPU has on hand -- in context -- in registers -- and to return to the place interrupted from, as if nothing at all happened (registers restored perfectly).

*Many architectures store this in a temporary register, but the most common use-case is to immediately push it onto the stack in software. Well, "most common" may be pushing it, as there are cases where registers don't need to be saved (the ISR is simple enough not to bother pushing things to stack). Whether this happens as a matter of hand optimization or compiler generated output, also varies widely.

Normally, an ISR pushes those registers onto the stack (or whatever substitute or equivalent the machine has), does its work, restores registers from stack, and returns to the interrupted location. The CPU simply jumps back and resumes execution.

(This also includes, by way of sufficient degree of generalization, machines with one interrupt, where software has to figure out where to go; in that case, the sub-ISR for each device is selected by some code common to all of them. Some registers may have been saved by the common code, which just shifts around who's responsible for what.)

We could very well write a sort of inverted interrupt system, where most of the time is spent in interrupt(s) instead, and the act of RETurning from the interrupt is equivalent to calling main() -- but not calling it from the top, or just anywhere, but precisely where it was interrupted from.

What use is that? It's certainly a powerful, and potentially fine-grained, method of control. Hm. Jumping into code "at random" will certainly not do, as registers need to be initialized to reasonable values for any given code passage to do something meaningful (let alone, its intended function).

At the very least, we'd need to record what state the CPU was in (registers and such, probably stack memory as well) upon interruption, and restore everything back exactly as it was found, on return. Anything more, is either more of the same, or we need to know a heck of a lot more about the active code, what it does, at what addresses it can be jumped into, and with what register values.
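For a portable toy version of "record the state, jump elsewhere, then resume exactly where we left off", standard C's setjmp/longjmp can stand in for the hardware mechanism. This is an analogy, not a real interrupt: it's synchronous and saves far less state than a full context switch:

```c
#include <setjmp.h>

/* setjmp() records enough CPU state to come back to this point later;
 * longjmp() restores it -- conceptually what interrupt entry and
 * return do, minus the asynchrony and the full register save. */

static jmp_buf main_ctx;
static int steps = 0;   /* static, so it survives the longjmp */

static void fake_isr(void)
{
    ++steps;                     /* "handle the interrupt" */
    longjmp(main_ctx, 1);        /* "return from interrupt" */
}

int run_demo(void)
{
    if (setjmp(main_ctx) == 0) { /* first pass: context recorded */
        ++steps;                 /* "main code before the interrupt" */
        fake_isr();              /* never returns normally */
    }
    ++steps;                     /* "main code resumed afterwards" */
    return steps;                /* 3 on a correct run */
}
```

Real context-switching code does the same dance with every register, the stack pointer, and (on larger systems) memory-protection state, but the save/jump/restore shape is the same.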

Well, even just the basics leave some interesting options. This mechanic could be the basis of a software debugger, where the CPU is interrupted as soon as it can be (i.e., just after executing the returned-to instruction), thus effectively single-stepping the CPU. Register values can be read out and modified at each step, or the return location modified, allowing stepping through the program in various ways. The ISR doesn't need to know about the interrupted code; it can assume the user knows what they're doing.

The interrupt could also be the elementary operation of a multitasking operating system, where execution paths may be divided between multiple locations (threads); context (registers and other data) is stored specific to each thread, and the interrupt system determines which one to fetch and return to based on a priority scheme.

Where does a thread come from? It's an object made (and executed) by the kernel, at will; we can construct entirely new context by initializing a thread object (including default registers and memory space) and jumping to it. As long as the code in that thread expects such a start, that's all that matters. Hence we can load and run application code at will. (At least, subject to memory limitations; some platforms have read-only code space, for example!)

With reflection (execution-time code analysis and generation), quite complex schemes of execution, function evaluation, object handling, and so on, can be constructed; this is very much state-of-the-art technology, usually by way of interpreted to JIT-compiled bytecode (to varying degrees in Java, JS, C#, etc.). (This is probably a poor description of things; I'm not very familiar with the advanced features, let alone internals, of these languages.)


Mind, just because you can, doesn't mean you should. There are more standardized ways to do this, perhaps using library functions, perhaps using more refined and well-understood RTOSs.

And, not all architectures can. Many (especially embedded platforms?) have an interrupt controller which monitors CPU activity / state / bus, and which can only be reset (cleared to generate another interrupt) when the CPU RETurns, that is, literally executes the interrupt-return instruction per se. This would make such interrupt service, if not impossible, at least more complicated. (For example, PUSHing a "fake" return address and RETing immediately, to clear the interrupt controller while continuing to execute the ISR; now other interrupts can overlap... but that includes the same interrupt again; one must be careful to avoid stack overflow!)

Tim Williams
  1. The main while loop will not run at clock frequency, so that's a poor guess: every opcode takes time, and usually more than one clock cycle. Yes, the main while loop stops executing while the CPU goes to execute the interrupt; at least on a single-core CPU. On a multi-core CPU, one core can keep executing main while another core executes the interrupt. When interrupt execution is finished, it generally returns to whatever the CPU was previously executing, but that is not the only option, as you are free to decide what you want to happen in the interrupt code you write. Just think of pre-emptive multitasking, where the executing task can be switched to another in a timer interrupt.

  2. You or your system requirements determine how long your interrupt can be and what you can do in it. Since your main while loop example does nothing, you can stay in the interrupt forever; it does not matter what you do there or for how long. But if the interrupt receives bytes from some communication interface, then obviously it must handle each byte before the next one arrives, or it can't receive data properly.
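A sketch of that byte-receiving case: the ISR just queues each byte into a ring buffer, and the main loop drains it at its leisure. The names below are illustrative, not any vendor's API:

```c
#include <stdint.h>

#define RX_BUF_SIZE 64           /* power of two, so we can mask, not mod */

static volatile uint8_t  rx_buf[RX_BUF_SIZE];
static volatile unsigned rx_head = 0;    /* written only by the ISR  */
static volatile unsigned rx_tail = 0;    /* written only by main     */

/* Called from the UART receive interrupt with the incoming byte.
 * Constant-time: store the byte and leave. */
void uart_rx_isr(uint8_t byte)
{
    unsigned next = (rx_head + 1u) & (RX_BUF_SIZE - 1u);
    if (next != rx_tail) {       /* drop the byte if the buffer is full */
        rx_buf[rx_head] = byte;
        rx_head = next;
    }
}

/* Called from the main loop; returns -1 when no byte is pending. */
int uart_rx_pop(void)
{
    if (rx_tail == rx_head)
        return -1;
    int b = rx_buf[rx_tail];
    rx_tail = (rx_tail + 1u) & (RX_BUF_SIZE - 1u);
    return b;
}
```

The ISR's worst-case duration is now tiny and fixed, and the only hard deadline left is "pop bytes faster, on average, than they arrive".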

Justme