I'm running minikube v1.31.2 (Kubernetes v1.27.4 on Docker 24.0.4) on Microsoft Windows 10 Pro, with the Hyper-V driver.

I want to use a local directory (a directory on the Windows host) as a volume in minikube that I can mount in pods.

According to the documentation, I tried this:

 minikube start --mount --mount-string="d:\minikube-data\volumes\general:/mnt/general"
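
For reference, I understand the same mount can also be done with the standalone minikube mount command, run in a separate terminal that stays open (same host and guest paths; this is just the other form I know of):

 minikube mount "d:\minikube-data\volumes\general:/mnt/general"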

There is no error message. I can refer to this volume in a deployment YAML like this:

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: scdf-2-10-3-oracle
  namespace: default
  labels:
    app: scdf-2-10-3-Oracle
spec:
  selector:
    matchLabels:
      app: scdf-2-10-3-Oracle-Maison
  template:
    metadata:
      labels:
        app: scdf-2-10-3-Oracle-Maison
    spec:
        containers:
          - name: scdf-2-10-3 
            image: dtacheron/scdf:2.10.3-oracle
            imagePullPolicy: IfNotPresent
            ports:
              - name: http
                containerPort: 9393
            volumeMounts:
              - mountPath: /data
                name: data-volume
                readOnly: false
            env:
              - name: SPRING_CLOUD_DATAFLOW_FEATURES_STREAMS_ENABLED
                value: "false"
              - name: SPRING_CLOUD_DATAFLOW_FEATURES_SCHEDULES_ENABLED
                value: "false"
              - name: SPRING_CLOUD_DATAFLOW_FEATURES_TASKS_ENABLED
                value: "true"
              - name: spring_datasource_url
                value: "xxx"
              - name: spring_datasource_username
                value: "xxx"
              - name: spring_datasource_password
                value: "xxx"
              - name: spring_datasource_driverClassName
                value: "oracle.jdbc.OracleDriver"
              - name: spring_datasource_initialization_mode
                value: "always"
              - name: SPRING_CLOUD_CONFIG_ENABLED
                value: "false"
              - name: SPRING_CLOUD_DATAFLOW_TASK_COMPOSEDTASKRUNNER_URI
                value: 'docker://springcloud/spring-cloud-dataflow-composed-task-runner:2.10.3'                
        volumes:                
          - name: data-volume
            hostPath:
              path: /mnt/general
              type: Directory

  replicas: 1

This deployment applies and starts without errors.

But in fact, nothing works.
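
A quick way to see the problem (assuming the image ships a shell with touch) is to write a file from inside the pod and then look for it in the Windows folder:

 kubectl exec deploy/scdf-2-10-3-oracle -- touch /data/test-from-pod.txt
 kubectl exec deploy/scdf-2-10-3-oracle -- ls -la /data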

If I SSH into the Linux VM that runs Kubernetes, I can see the /mnt/general folder (root:root, 755) and I can write files there. But I don't see the files from the Windows folder, and on Windows I don't see the files created in the VM. And after a minikube restart, the files created in the VM have disappeared.
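
Roughly what I do to check inside the VM (the expectation that the host mount would show up in the mount output is just my assumption):

 minikube ssh
 # inside the VM:
 ls -la /mnt/general
 touch /mnt/general/test-from-vm.txt
 mount | grep general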

Also, the docker user can't write files there (and I suppose that my pod runs as the docker user).
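
If it turns out to be only an ownership issue once the mount itself works, I assume the owner could be forced with the standalone minikube mount command (the uid/gid values below are placeholders for whatever user the container really runs as):

 minikube mount "d:\minikube-data\volumes\general:/mnt/general" --uid 1000 --gid 1000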

I suppose something has to be done so that Hyper-V gives the minikube VM access to the host file system, but what?
