Using a shared virtualenv for automation while also expecting the virtualenv's content to be (automatically) updated without the automation's knowledge/control is not a good practice - it opens the door to all kinds of obscure, intermittent, hard-to-reproduce failures.
So I wouldn't spend too much time trying to figure out a solid way of updating your package in the shared virtualenv without disrupting the automation jobs - sooner or later they'll be hit by other problems stemming from the same root cause: using a shared virtualenv. Think back to the very reason(s) for using a virtualenv in the first place.
For consistent results, automation should ensure that the virtualenv content is updated as needed as a preliminary step and does not (unexpectedly) change while the job runs. That isn't really possible with a virtualenv shared across multiple independent automation jobs. It might be possible with coordinated jobs, all sharing a common step that performs the virtualenv updates in a controlled manner to keep the virtualenv consistent for the duration of the jobs, but IMHO building such a complex system just isn't worth it; it's much simpler to not share the virtualenv and keep the jobs independent, with each job preparing its own environment (see the sketch below).
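As a rough illustration of that preliminary step, here's a minimal Python sketch of a job building its own throwaway virtualenv before doing any real work. The `requirements.txt` path and the `job-env-` prefix are just placeholders, and it assumes a POSIX layout (`bin/pip`):

```
# Minimal sketch: each job creates a private virtualenv as its first step,
# so nothing can change the environment underneath it mid-run.
import subprocess
import sys
import tempfile
from pathlib import Path

def prepare_job_env(requirements: Path) -> Path:
    """Create a fresh virtualenv for this job and install pinned requirements into it."""
    env_dir = Path(tempfile.mkdtemp(prefix="job-env-"))
    # Populate the virtualenv using the interpreter running this script.
    subprocess.run([sys.executable, "-m", "venv", str(env_dir)], check=True)
    pip = env_dir / "bin" / "pip"  # POSIX assumption; use Scripts\pip.exe on Windows
    subprocess.run([str(pip), "install", "-r", str(requirements)], check=True)
    return env_dir

if __name__ == "__main__":
    env = prepare_job_env(Path("requirements.txt"))
    # The job would then run its actual steps with env / "bin" / "python" ...
    print("job virtualenv ready at", env)
```

Creating the env fresh on every run costs some install time, but it eliminates the whole class of "something changed the environment underneath me" failures.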
If, regardless of these considerations, the request for a shared virtualenv still stands, then don't update the virtualenv outside (well-defined and announced) maintenance windows, during which execution of the automation jobs should be suspended.
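If you do go that route, the jobs need some way to know a maintenance window is active. One possible mechanism (purely illustrative, not something the setup requires) is a sentinel file that the maintenance procedure creates before touching the virtualenv and removes afterwards; the path below is made up:

```
# Illustrative sketch only: jobs refuse to start while a maintenance flag exists.
import sys
from pathlib import Path

MAINTENANCE_FLAG = Path("/var/run/shared-venv.maintenance")  # hypothetical path

def ensure_not_in_maintenance() -> None:
    """Abort the job cleanly if the shared virtualenv is being updated."""
    if MAINTENANCE_FLAG.exists():
        sys.exit("shared virtualenv maintenance in progress, job suspended")

if __name__ == "__main__":
    ensure_not_in_maintenance()
    # ... proceed with the job's actual steps ...
```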
As a side note, it's also not good practice to leave your packages unversioned, unless maybe you have a very clear and easily accessible indication of the exact code version used in each particular case (the commit SHA, for example) - when problems occur you'll most likely need that info to reproduce the problem, debug it and/or verify fixes with some degree of certainty.
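Whichever indicator you settle on, make it trivial for the jobs to log it. For a properly versioned package, something along these lines works with just the standard library (the `mypackage` name is a placeholder; `importlib.metadata` needs Python 3.8+):

```
# Sketch: log the exact installed version of the package a job is about to use.
from importlib import metadata

def report_package_version(dist_name: str) -> str:
    """Return the installed version string of dist_name, or a clear marker if missing."""
    try:
        return metadata.version(dist_name)
    except metadata.PackageNotFoundError:
        return "<not installed>"

if __name__ == "__main__":
    print("mypackage ==", report_package_version("mypackage"))
```

If you rely on the commit SHA instead, the equivalent would be recording the SHA the package was installed from as part of the job's logs.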