Sharing volumes inside a Docker container on a GitLab runner
So, I am trying to mount a working directory containing my project files into a child instance spun up on a GitLab runner, similar to a DinD setup. I want to be able to mount a volume in a Docker instance so that I can mess around and test things, e2e tests and the like, without having to build a new container just to inject the files I need. Ideally this would let me share data in a DinD environment without building a new container for every job that runs.

I tried the approach from "Docker volumes not mounted when using docker:dind (#41227) · Issues · GitLab.org / GitLab FOSS · GitLab". Some directories do get mounted, but not the project data I am looking for.

So, in a test job I create a dummy file, then mount the directory into a container, expecting to see the file there. I have a test ci.yml that matches what I want: I create test files in the volume I am mounting and expect to see them in the directory listing, but sadly they are not there. On my second attempt I could not get the container ID at all, because the labels do not exist on the runner and the lookup always comes back blank. The first stage did show promise, though: it works perfectly on a "shell" runner outside of k8s. But as soon as I change the tags to use the k8s runner, it falls apart. I can see the old directory /web and the directory I am mounting, but not the files inside it. Strange.

Here is the ci.yml:
image: docker:stable

services:
  - docker:dind

stages:
  - compile

variables:
  SHARED_PATH: /builds/$CI_PROJECT_PATH/shared/
  DOCKER_DRIVER: overlay2

.test: &test
  stage: compile
  tags:
    - k8s-vols
  script:
    - docker version
    - 'export TESTED_IMAGE=$(echo ${CI_JOB_NAME} | sed "s/test //")'
    - docker pull ${TESTED_IMAGE}
    - 'export SHARED_PATH="$(dirname ${CI_PROJECT_DIR})/shared"'
    - echo ${SHARED_PATH}
    - echo ${CI_PROJECT_DIR}
    - mkdir -p ${SHARED_PATH}
    - touch ${SHARED_PATH}/test_file
    - touch ${CI_PROJECT_DIR}/test_file2
    - find ${SHARED_PATH}
    #- find ${CI_PROJECT_DIR}
    - docker run --rm -v ${CI_PROJECT_DIR}:/mnt ${TESTED_IMAGE} find /mnt
    - docker run --rm -v ${CI_PROJECT_DIR}:/mnt ${TESTED_IMAGE} ls -lR /mnt
    - docker run --rm -v ${SHARED_PATH}:/mnt ${TESTED_IMAGE} find /mnt
    - docker run --rm -v ${SHARED_PATH}:/mnt ${TESTED_IMAGE} ls -lR /mnt

test alpine: *test
test ubuntu: *test
test centos: *test
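As a side note on the template above: the `TESTED_IMAGE` line just strips the `test ` prefix from the job name, so each `test <image>` job pulls the matching image, and `SHARED_PATH` is rederived from the project directory. A quick standalone check of those derivations (with hypothetical example values for the CI variables):

```shell
#!/bin/sh
# Simulate the variables a "test alpine" job would see (example values).
CI_JOB_NAME="test alpine"
CI_PROJECT_DIR="/builds/mygroup/myproject"

# Same derivations as in the .test template above.
TESTED_IMAGE=$(echo "${CI_JOB_NAME}" | sed "s/test //")
SHARED_PATH="$(dirname "${CI_PROJECT_DIR}")/shared"

echo "${TESTED_IMAGE}"   # -> alpine
echo "${SHARED_PATH}"    # -> /builds/mygroup/shared
```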
testing:
  stage: compile
  tags:
    - k8s-vols
  image:
    name: docker:stable
    entrypoint: ["/bin/sh", "-c"]
  script:
    # get id of the build container
    - export CONTAINER_ID=$(docker ps -q -f "label=com.gitlab.gitlab-runner.job.id=$CI_JOB_ID" -f "label=com.gitlab.gitlab-runner.type=build")
    # get mount name
    - export MOUNT_NAME=$(docker inspect $CONTAINER_ID -f "{{ range .Mounts }}{{ if eq .Destination \"/builds/${CI_PROJECT_NAMESPACE}\" }}{{ .Source }}{{end}}{{end}}" | cut -d "/" -f 6)
    # run container
    - docker run -v $MOUNT_NAME:/builds -w /builds/$CI_PROJECT_NAME --entrypoint=/bin/sh busybox -c "ls -la"
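For what it's worth, the `MOUNT_NAME` line in the `testing` job assumes that `docker inspect` reports the mount source as `/var/lib/docker/volumes/<volume-name>/_data`, so `cut -d "/" -f 6` picks out the volume name. A standalone check of that field extraction, using a hypothetical mount source path:

```shell
#!/bin/sh
# Example mount source as "docker inspect" would report it for a named
# volume (the volume name here is made up for illustration).
MOUNT_SOURCE="/var/lib/docker/volumes/runner-abc123-project-42-concurrent-0/_data"

# The path starts with "/", so the "/"-separated fields are:
# 1="" 2=var 3=lib 4=docker 5=volumes 6=<volume-name>
MOUNT_NAME=$(echo "${MOUNT_SOURCE}" | cut -d "/" -f 6)
echo "${MOUNT_NAME}"   # -> runner-abc123-project-42-concurrent-0
```

Of course, this only helps if `CONTAINER_ID` resolves in the first place, which is exactly what fails on the k8s runner since those `com.gitlab.gitlab-runner.*` labels never show up there.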
Here is the values file I am working with:
image: docker-registry.corp.com/base-images/gitlab-runner:alpine-v13.3.1
imagePullPolicy: IfNotPresent
gitlabUrl: http://gitlab.corp.com
runnerRegistrationToken: "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"
runnerToken: ""
unregisterRunners: true
terminationGracePeriodSeconds: 3600
concurrent: 5
checkInterval: 10
rbac:
  create: true
  resources: ["pods", "pods/exec", "secrets"]
  verbs: ["get", "list", "watch", "update", "create", "delete"]
  clusterWideAccess: false
metrics:
  enabled: true
runners:
  image: docker-registry.corp.com/base-images/docker-dind:v1
  imagePullPolicy: "if-not-present"
  requestConcurrency: 5
  locked: true
  tags: "k8s-vols"
  privileged: true
  secret: gitlab-runner-vols
  namespace: gitlab-runner-k8s-vols
  pollTimeout: 180
  outputLimit: 4096
  kubernetes:
    volumes:
      - type: host_path
        volume:
          name: docker
          host_path: /var/run/docker.sock
          mount_path: /var/run/docker.sock
          read_only: false
  cache: {}
  builds: {}
  services: {}
  helpers:
    cpuLimit: 200m
    memoryLimit: 256Mi
    cpuRequests: 100m
    memoryRequests: 128Mi
    image: docker-registry.corp.com/base-images/gitlab-runner-helper:x86_64-latest
  env:
    NAME: VALUE
    CI_SERVER_URL: http://gitlab.corp.com
    CLONE_URL:
    RUNNER_REQUEST_CONCURRENCY: '1'
    RUNNER_EXECUTOR: kubernetes
    REGISTER_LOCKED: 'true'
    RUNNER_TAG_LIST: k8s-vols
    RUNNER_OUTPUT_LIMIT: '4096'
    KUBERNETES_IMAGE: ubuntu:18.04
    KUBERNETES_PRIVILEGED: 'true'
    KUBERNETES_NAMESPACE: gitlab-runners-k8s-vols
    KUBERNETES_POLL_TIMEOUT: '180'
    KUBERNETES_CPU_LIMIT:
    KUBERNETES_MEMORY_LIMIT:
    KUBERNETES_CPU_REQUEST:
    KUBERNETES_MEMORY_REQUEST:
    KUBERNETES_SERVICE_ACCOUNT:
    KUBERNETES_SERVICE_CPU_LIMIT:
    KUBERNETES_SERVICE_MEMORY_LIMIT:
    KUBERNETES_SERVICE_CPU_REQUEST:
    KUBERNETES_SERVICE_MEMORY_REQUEST:
    KUBERNETES_HELPER_CPU_LIMIT:
    KUBERNETES_HELPER_MEMORY_LIMIT:
    KUBERNETES_HELPER_CPU_REQUEST:
    KUBERNETES_HELPER_MEMORY_REQUEST:
    KUBERNETES_HELPER_IMAGE:
    KUBERNETES_PULL_POLICY:
securityContext:
  fsGroup: 65533
  runAsUser: 100
resources: {}
affinity: {}
nodeSelector: {}
tolerations: []
envVars:
  - name: CI_SERVER_URL
    value: http://gitlab.corp.com
  - name: CLONE_URL
  - name: RUNNER_REQUEST_CONCURRENCY
    value: '1'
  - name: RUNNER_EXECUTOR
    value: kubernetes
  - name: REGISTER_LOCKED
    value: 'true'
  - name: RUNNER_TAG_LIST
    value: k8s-vols
  - name: RUNNER_OUTPUT_LIMIT
    value: '4096'
  - name: KUBERNETES_IMAGE
    value: ubuntu:18.04
  - name: KUBERNETES_PRIVILEGED
    value: 'true'
  - name: KUBERNETES_NAMESPACE
    value: gitlab-runner-k8s-vols
  - name: KUBERNETES_POLL_TIMEOUT
    value: '180'
  - name: KUBERNETES_CPU_LIMIT
  - name: KUBERNETES_MEMORY_LIMIT
  - name: KUBERNETES_CPU_REQUEST
  - name: KUBERNETES_MEMORY_REQUEST
  - name: KUBERNETES_SERVICE_ACCOUNT
  - name: KUBERNETES_SERVICE_CPU_LIMIT
  - name: KUBERNETES_SERVICE_MEMORY_LIMIT
  - name: KUBERNETES_SERVICE_CPU_REQUEST
  - name: KUBERNETES_SERVICE_MEMORY_REQUEST
  - name: KUBERNETES_HELPER_CPU_LIMIT
  - name: KUBERNETES_HELPER_MEMORY_LIMIT
  - name: KUBERNETES_HELPER_CPU_REQUEST
  - name: KUBERNETES_HELPER_MEMORY_REQUEST
  - name: KUBERNETES_HELPER_IMAGE
  - name: KUBERNETES_PULL_POLICY
hostAliases:
  - ip: "10.10.x.x"
    hostnames:
      - "ch01"
podAnnotations:
  prometheus.io/path: "/metrics"
  prometheus.io/scrape: "true"
  prometheus.io/port: "9252"
podLabels: {}
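For reference, if the chart renders that `runners.kubernetes.volumes` stanza correctly, the runner's generated config.toml should contain a host_path section along these lines (a sketch based on the values above, not output captured from my runner):

```
[runners.kubernetes]
  [[runners.kubernetes.volumes.host_path]]
    name = "docker"
    host_path = "/var/run/docker.sock"
    mount_path = "/var/run/docker.sock"
    read_only = false
```

That is what makes `docker` commands in the build pod talk to the node's daemon, which also means any `-v /some/path:...` mounts refer to paths on the node, not inside the pod.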
So, I made some tweaks to the Helm chart: I added a volumes section to the config map:
config.toml: |
  concurrent = {{ .Values.concurrent }}
  check_interval = {{ .Values.checkInterval }}
  log_level = {{ default "info" .Values.logLevel | quote }}
  {{- if .Values.metrics.enabled }}
  listen_address = "[::]:9252"
  {{- end }}
  volumes = ["/builds:/builds"]
  #volumes = ["/var/run/docker.sock:/var/run/docker.sock", "/cache", "/builds:/builds"]
I tried using that last line, which includes the docker sock mount, but when it ran it complained that it could not find docker.sock (file not found), so I kept only the builds directory in this section and added the docker.sock mount in the values file instead. That part seems to work well enough, apart from the problem described above.
I have also seen examples that set the runner to privileged, but that did not seem to help me much.
When I run the pipeline, this is the output: none of the files show up.
Thanks for taking the time to look this over; it is much appreciated.