
Argo input and output: passing a directory or file to the next step with output and input

Please credit the source when reposting:
Argo input and output: passing a directory or file to the next step with output and input

In some scenarios you need to use outputs to pass a directory or file on to the next step.

Argo provides two mechanisms:
one is the parameter approach (parameters),
the other is the artifact approach (artifacts).

They suit different scenarios: the parameter approach reads the contents of a text file and passes that value to the next step,
while the artifact approach passes the file itself, or an entire directory.

The parameter approach (parameters)

The parameter approach is fairly simple to use and configure; see the following for reference:

# Output parameters provide a way to use the contents of a file,
# as a parameter value in a workflow. In that regard, they are
# similar in concept to script templates, with the difference being
# that the output parameter values are obtained via file contents
# instead of stdout (as with script templates). Secondly, there can
# be multiple 'output.parameters.xxx' in a single template, versus
# a single 'output.result' from a script template.
# 
# In this example, the 'whalesay' template produces an output
# parameter named 'hello-param', taken from the file contents of
# /tmp/hello_world.txt. This parameter is passed to a subsequent
# step as an input parameter to the template, 'print-message'.
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: output-parameter-
spec:
  entrypoint: output-parameter
  templates:
  - name: output-parameter
    steps:
    - - name: generate-parameter
        template: whalesay
    - - name: consume-parameter
        template: print-message
        arguments:
          parameters:
          - name: message
            value: "{{steps.generate-parameter.outputs.parameters.hello-param}}"

  - name: whalesay
    container:
      image: docker/whalesay:latest
      command: [sh, -c]
      args: ["echo -n hello world > /tmp/hello_world.txt"]
    outputs:
      parameters:
      - name: hello-param
        valueFrom:
          path: /tmp/hello_world.txt

  - name: print-message
    inputs:
      parameters:
      - name: message
    container:
      image: docker/whalesay:latest
      command: [cowsay]
      args: ["{{inputs.parameters.message}}"]

GitHub example of parameter-style input and output

Difference between steps mode and DAG mode

Note that in steps mode the parameter is referenced as:

{{steps.generate-parameter.outputs.parameters.hello-param}}

With DAG templates, use the tasks prefix to reference other steps, for example:
{{tasks.generate-artifact.outputs.artifacts.hello-art}}
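
For reference, a minimal DAG-mode sketch (the template and artifact names here simply mirror the reference above; adjust them to your own workflow):

  - name: artifact-example
    dag:
      tasks:
      - name: generate-artifact
        template: whalesay
      - name: consume-artifact
        dependencies: [generate-artifact]
        template: print-message
        arguments:
          artifacts:
          - name: message
            from: "{{tasks.generate-artifact.outputs.artifacts.hello-art}}"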

The artifact approach (artifacts)

Example

apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: output-artifacts-
spec:
  entrypoint: output-artifacts
  templates:
  - name: output-artifacts
    steps:
    - - name: generate-artifacts
        template: generate
    - - name: consume-artifacts
        template: consume
        arguments:
          artifacts:
          - name: in-artifact
            from: "{{steps.generate.outputs.artifacts.out-artifact}}"

  - name: generate
    container:
      image: docker/whalesay:latest
      command: [sh, -c]
      args: ["echo -n hello world > /tmp/hello_world.txt"]
    outputs:
      artifacts:
      - name: out-artifact
        path: /tmp/hello_world.txt

  - name: consume
    inputs:
      artifacts:
      - name: in-artifact
        path: /tmp/input.txt
    container:
      image: docker/whalesay:latest
      command: [sh, -c]
      args: ["echo 'input artifact contents:' && cat /tmp/input.txt"]

Problems you may run into

controller is not configured with a default archive location

Cause

The artifact approach needs somewhere to stage the files in transit, so Argo must be configured with a storage backend.
Issue reference
Source code reference

Argo currently supports three types of storage:
AWS S3, GCS (Google Cloud Storage), and MinIO.

Solution

Configure the S3 storage backend at the point of use

Add the following under the output artifact:

s3:
  endpoint: s3.amazonaws.com
  bucket: my-aws-bucket-name
  key: path/in/bucket/my-input-artifact.txt
  accessKeySecret:
    name: my-aws-s3-credentials
    key: accessKey
  secretKeySecret:
    name: my-aws-s3-credentials
    key: secretKey
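
The accessKeySecret and secretKeySecret above reference a Kubernetes secret. Assuming the names used in this snippet, it could be created like this (replace the placeholders with real credentials):

kubectl create secret generic my-aws-s3-credentials \
  --from-literal=accessKey=<your-access-key> \
  --from-literal=secretKey=<your-secret-key>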

If the S3 access keys are already configured in the current environment, the accessKeySecret and secretKeySecret entries are not needed, for example:

s3:
  endpoint: s3.amazonaws.com
  bucket: my-aws-bucket-name
  key: path/in/bucket/my-input-artifact.txt

A complete example:

apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: output-artifacts-
spec:
  entrypoint: output-artifacts
  templates:
  - name: output-artifacts
    steps:
    - - name: generate-artifacts
        template: generate
    - - name: consume-artifacts
        template: consume
        arguments:
          artifacts:
          - name: in-artifact
            from: "{{steps.generate.outputs.artifacts.out-artifact}}"

  - name: generate
    container:
      image: docker/whalesay:latest
      command: [sh, -c]
      args: ["echo -n hello world > /tmp/hello_world.txt"]
    outputs:
      artifacts:
      - name: out-artifact
        path: /tmp/hello_world.txt
        s3:
          endpoint: s3.amazonaws.com
          bucket: my-aws-bucket-name
          key: path/in/bucket/my-input-artifact.txt

  - name: consume
    inputs:
      artifacts:
      - name: in-artifact
        path: /tmp/input.txt
    container:
      image: docker/whalesay:latest
      command: [sh, -c]
      args: ["echo 'input artifact contents:' && cat /tmp/input.txt"]

Configure the S3 storage backend centrally in the controller config

Adding the S3 configuration at every point of use would be very repetitive; Argo provides a single central place to configure it.
Edit the config map with:

kubectl edit configmap workflow-controller-configmap -n argo
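
To just view the current configuration without editing it:

kubectl get configmap workflow-controller-configmap -n argo -o yaml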

Add the following content:

data:
  config: |
    artifactRepository:
      s3:
        bucket: my-aws-bucket-name
        keyPrefix: prefix/in/bucket     # optional
        endpoint: s3.amazonaws.com      # AWS => s3.amazonaws.com; GCS => storage.googleapis.com
        insecure: true                  # omit for S3/GCS; needed when MinIO runs without TLS
        accessKeySecret:
          name: my-aws-s3-credentials
          key: accessKey
        secretKeySecret:
          name: my-aws-s3-credentials
          key: secretKey
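
With this default artifact repository in place, the per-output s3 block from the earlier example is no longer needed, and the generate template's outputs shrink back to:

    outputs:
      artifacts:
      - name: out-artifact
        path: /tmp/hello_world.txt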

Configure the MinIO storage backend centrally in the controller config

MinIO is the artifact store used in Argo's own getting-started setup and is easy to install.
It must be installed before use; the steps are as follows:

Official steps for reference:
Install an Artifact Repository

(A proxy may be required to reach these sites from mainland China.)

brew install kubernetes-helm # mac
helm init
helm install stable/minio --name argo-artifacts --set service.type=LoadBalancer --set persistence.enabled=false
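
To confirm MinIO came up, the Helm release label can be used (the release name argo-artifacts is assumed from the --name flag above):

kubectl get pods,svc -l release=argo-artifacts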

Then edit the config map with:

kubectl edit configmap workflow-controller-configmap -n argo

Add the following content:

data:
  config: |
    artifactRepository:
      s3:
        bucket: my-bucket
        endpoint: argo-artifacts-minio.default:9000
        insecure: true
        # accessKeySecret and secretKeySecret are secret selectors.
        # It references the k8s secret named 'argo-artifacts-minio'
        # which was created during the minio helm install. The keys,
        # 'accesskey' and 'secretkey', inside that secret are where the
        # actual minio credentials are stored.
        accessKeySecret:
          name: argo-artifacts
          key: accesskey
        secretKeySecret:
          name: argo-artifacts
          key: secretkey

Note that the name and key under accessKeySecret and secretKeySecret must be set according to your own environment,
not the values from the official site:

AccessKey: AKIAIOSFODNN7EXAMPLE
SecretKey: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY

To find them:

kubectl get secret
# Output:
NAME                  TYPE                                  DATA      AGE
argo-artifacts        Opaque                                2         4h
default-token-2cvxb   kubernetes.io/service-account-token   3         61d
# so argo-artifacts is the name we need
kubectl get secret/argo-artifacts -o wide
kubectl describe secret/argo-artifacts
# Output:
Name:         argo-artifacts
Namespace:    default
Labels:       app=minio
              chart=minio-1.9.1
              heritage=Tiller
              release=argo-artifacts
Annotations:  <none>

Type:  Opaque

Data
====
accesskey:  20 bytes
secretkey:  40 bytes

You can see it does contain the two key files.

Seeing the actual key values takes a little more work. You can decode the secret directly (see the sketch below), or create a pod that mounts the secret and read the files from inside it; the pod-based steps follow.
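
A quick sketch of the direct approach, assuming the secret name argo-artifacts found above (the base64 flag may differ on macOS):

kubectl get secret argo-artifacts -o jsonpath='{.data.accesskey}' | base64 -d
kubectl get secret argo-artifacts -o jsonpath='{.data.secretkey}' | base64 -d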

Create a pod that accesses the secret data through a volume.
Below is a config file that can be used to create such a pod:

vi secret-pod.yaml

With the following content:

apiVersion: v1
kind: Pod
metadata:
  name: secret-test-pod
spec:
  containers:
    - name: test-container
      image: nginx
      volumeMounts:
          # name must match the volume name below
          - name: secret-volume
            mountPath: /etc/secret-volume
  # The secret data is exposed to Containers in the Pod through a Volume.
  volumes:
    - name: secret-volume
      secret:
        secretName: argo-artifacts

The secretName here must match the name returned by kubectl get secret above.

1. Create the pod:

kubectl create -f secret-pod.yaml

2. Verify the pod is running:

kubectl get pod secret-test-pod

Output:

NAME              READY     STATUS    RESTARTS   AGE
secret-test-pod   1/1       Running   0          10s

3. Open a shell into the container running in the pod:

kubectl exec -it secret-test-pod -- /bin/bash

4. The secret data is exposed to the container under the /etc/secret-volume directory via the volume mount. Change into that directory to inspect it:

root@secret-test-pod:/# cd /etc/secret-volume

5. List the files under /etc/secret-volume:

root@secret-test-pod:/etc/secret-volume# ls

The output shows two files, one for each piece of secret data:

accesskey  secretkey

Print their contents:

cat accesskey
AKIAIOSFODNN7EXAMPLE
cat secretkey
wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY

It turns out the original credentials in the Helm-installed argo-artifacts secret are the same as the values on the official site.
However, with these values authentication still fails.
(There is currently a bug and the default credentials do not work.)

A problem you may run into:
secret 'argo-artifacts-minio-user' does not have the key 'AKIAIOSFODNN7EXAMPLE'

Cause
Authentication fails with the default credentials.

Solution (awaiting an official reply):
Minio - Default Artifact does not work


References:

Comparison of parameter-style and artifact-style input/output usage

Configuring Your Artifact Repository
