Here is an example that helped me understand why we need both sets of labels.
Note: I'm still learning k8s too, so I have simplified things based on my understanding; it's possible something in this description is technically incorrect without my realizing it. Anyone should feel free to correct this, but I hope it helps answer the question.
Imagine you have the following Deployment configured:
apiVersion: apps/v1
kind: Deployment
metadata:
  ...
spec:
  selector:
    matchLabels:
      app: my-serviceA
      release: free
  template:
    metadata:
      labels:
        app: my-serviceA
        release: free
        version: '254'
        team: 'service-A-team'
The app and release labels in spec.selector.matchLabels tell the Deployment which pods it controls, e.g. which pods it may terminate (strictly speaking this is done by the ReplicaSet the Deployment creates, but the idea is the same). When matching pods, the Deployment only considers these two labels; any other labels set in spec.template.metadata.labels are ignored, so a pod with matching app and release labels belongs to the Deployment regardless of what else is set on it.
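To see the selector in action, you can run the same label query the Deployment uses. This assumes the manifest above has been applied; my-deployment is a made-up name, since the real name is in the omitted metadata:

# The pods the Deployment's selector matches:
kubectl get pods -l 'app=my-serviceA,release=free'

# The selector as stored on the Deployment itself:
kubectl get deployment my-deployment -o jsonpath='{.spec.selector.matchLabels}'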
This means the pods can carry other labels that the Deployment ignores: labels that might change regularly, and that might be used by other things (k8s itself or other tools). For example, the team label might be used to route alerts from a pod to the right on-call team, and the version label helps engineers see which version of the code a pod is running. Those labels might be updated regularly, e.g. the version number is incremented with each release, or the team monitoring the service may change (or change name), but the Deployment is not affected by any of that, because it only watches app and release, which should be very stable. (In fact, in apps/v1 the selector is immutable once the Deployment has been created.)
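Other tools (or just you, on the command line) can select pods by those extra labels without involving the Deployment at all. For instance, using the label values from the manifest above:

# All pods owned by this team, across every Deployment:
kubectl get pods -l team=service-A-team

# All pods of this service still running the old version:
kubectl get pods -l 'app=my-serviceA,version=254'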
So when a new version of the app is released (254 -> 255), the version label in spec.template.metadata.labels changes, but the Deployment knows that the old pods (with version: '254') and the new pods (with version: '255') all "belong" to it during the rollout, because it ignores the version label and only considers the labels set in spec.selector.matchLabels.
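You can watch this happen: bumping the version label in the pod template triggers a rolling update, while the selector keeps matching both old and new pods throughout. A sketch, again assuming the Deployment is named my-deployment:

# Changing a template label (not the selector) starts a new rollout:
kubectl patch deployment my-deployment --type merge \
  -p '{"spec":{"template":{"metadata":{"labels":{"version":"255"}}}}}'

# Both version 254 and version 255 pods match the selector while this runs:
kubectl rollout status deployment/my-deployment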