Linkerd and k8s not working

Tags: kubernetes, feathersjs, linkerd

I'm trying to get Linkerd working on Kubernetes. I'm using the linkerd DaemonSet example from their documentation, running on my local minikube.

It's all deployed in the production namespace. When I try

http_proxy=$(kubectl --namespace=production get svc l5d -o jsonpath="{.status.loadBalancer.ingress[0].*}"):4140 curl -s http://apiserver/readinezs

nothing happens. Where did my setup go wrong?
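For reference, the jsonpath in that command extracts the external address of the l5d service; if it expands to an empty string, the proxy address degenerates to ":4140" and curl can appear to do nothing. A minimal check, using only commands that already appear here:

# print what the jsonpath expands to (empty when the LB has no ingress)
kubectl --namespace=production get svc l5d -o jsonpath="{.status.loadBalancer.ingress[0].*}"
# compare with the full service status
kubectl --namespace=production get svc l5d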

My linkerd yaml:

# runs linkerd in a daemonset, in linker-to-linker mode
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: l5d-config
data:
  config.yaml: |-
    admin:
      port: 9990

    namers:
    - kind: io.l5d.k8s
      experimental: true
      host: localhost
      port: 8001

    telemetry:
    - kind: io.l5d.prometheus
    - kind: io.l5d.recentRequests
      sampleRate: 0.25

    usage:
      orgId: linkerd-examples-daemonset

    routers:
    - protocol: http
      label: outgoing
      dtab: |
        /srv        => /#/io.l5d.k8s/production/http;
        /host       => /srv;
        /svc        => /host;
        /host/world => /srv/world-v1;
      interpreter:
        kind: default
        transformers:
        - kind: io.l5d.k8s.daemonset
          namespace: production
          port: incoming
          service: l5d
      servers:
      - port: 4140
        ip: 0.0.0.0
      responseClassifier:
        kind: io.l5d.retryableRead5XX

    - protocol: http
      label: incoming
      dtab: |
        /srv        => /#/io.l5d.k8s/production/http;
        /host       => /srv;
        /svc        => /host;
        /host/world => /srv/world-v1;
      interpreter:
        kind: default
        transformers:
        - kind: io.l5d.k8s.localnode
      servers:
      - port: 4141
        ip: 0.0.0.0
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  labels:
    app: l5d
  name: l5d
spec:
  template:
    metadata:
      labels:
        app: l5d
    spec:
      volumes:
      - name: l5d-config
        configMap:
          name: "l5d-config"
      containers:
      - name: l5d
        image: buoyantio/linkerd:0.9.1
        env:
        - name: POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        args:
        - /io.buoyant/linkerd/config/config.yaml
        ports:
        - name: outgoing
          containerPort: 4140
          hostPort: 4140
        - name: incoming
          containerPort: 4141
        - name: admin
          containerPort: 9990
        volumeMounts:
        - name: "l5d-config"
          mountPath: "/io.buoyant/linkerd/config"
          readOnly: true

      - name: kubectl
        image: buoyantio/kubectl:v1.4.0
        args:
        - "proxy"
        - "-p"
        - "8001"
---
apiVersion: v1
kind: Service
metadata:
  name: l5d
spec:
  selector:
    app: l5d
  type: LoadBalancer
  ports:
  - name: outgoing
    port: 4140
  - name: incoming
    port: 4141
  - name: admin
    port: 9990
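Since this config routes outgoing traffic through port 4140 on each node, one way to exercise the outgoing router without any external IP is to port-forward directly to one of the linkerd pods. A sketch, assuming only the app=l5d label from the DaemonSet above:

# grab one linkerd pod and forward its outgoing port locally
L5D_POD=$(kubectl --namespace=production get pod -l app=l5d -o jsonpath="{.items[0].metadata.name}")
kubectl --namespace=production port-forward $L5D_POD 4140:4140 &
# route a request through the proxy; linkerd resolves the host "apiserver"
# via the dtab above (/svc => /host => /srv => /#/io.l5d.k8s/production/http/apiserver)
http_proxy=localhost:4140 curl -s http://apiserver/readinezs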
Here's my deployment for the apiservice:

kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: apiserver-production
spec:
  replicas: 1
  template:
    metadata:
      name: apiserver
      labels:
        app: apiserver
        role: gateway
        env: production
    spec:
      dnsPolicy: ClusterFirst
      containers:
      - name: apiserver
        image: eu.gcr.io/xxxxx/apiservice:latest
        env:
        - name: MONGO_HOST
          valueFrom:
            secretKeyRef:
              name: mongosecret
              key: host
        - name: MONGO_PORT
          valueFrom:
            secretKeyRef:
              name: mongosecret
              key: port
        - name: MONGO_USR
          valueFrom:
            secretKeyRef:
              name: mongosecret
              key: username
        - name: MONGO_PWD
          valueFrom:
            secretKeyRef:
              name: mongosecret
              key: password
        - name: MONGO_DB
          valueFrom:
            secretKeyRef:
              name: mongosecret
              key: db
        - name: MONGO_PREFIX
          valueFrom:
            secretKeyRef:
              name: mongosecret
              key: prefix
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        - name: POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        - name: http_proxy
          value: $(NODE_NAME):4140
        resources:
          limits:
            memory: "300Mi"
            cpu: "50m"
        imagePullPolicy: Always
        command:
        - "pm2-docker"
        - "processes.json"
        ports:
        - name: apiserver
          containerPort: 8080
      - name: kubectl
        image: buoyantio/kubectl:1.2.3
        args:
        - proxy
        - "-p"
        - "8001"
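The http_proxy entry relies on Kubernetes expanding $(NODE_NAME) at container start, which works because NODE_NAME is declared earlier in the same env list. A quick sanity check of what the container actually sees (looking the pod up by its app=apiserver label is an assumption):

API_POD=$(kubectl --namespace=production get pod -l app=apiserver -o jsonpath="{.items[0].metadata.name}")
kubectl --namespace=production exec $API_POD -c apiserver -- env | grep -i proxy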
The service looks like this:

kind: Service
apiVersion: v1
metadata:
  name: apiserver
spec:
  selector:
    app: apiserver
    role: gateway
  type: LoadBalancer
  ports:
  - name: http
    port: 8080
  - name: external
    port: 80
    targetPort: 8080
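If requests addressed to apiserver resolve to nothing, it's worth confirming that this selector actually matches the deployment's pods; the endpoints list should show the pod IP behind port 8080 (a routine check, independent of linkerd):

kubectl --namespace=production get endpoints apiserver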
In my Node app I'm using global-tunnel:
const globalTunnel = require('global-tunnel');

const server = app.listen(port);
server.on('listening', function(){

  // make sure all traffic goes over linkerd
  globalTunnel.initialize({
    host: 'localhost',
    port: 4140
  });

  console.log(`Feathers application started on ${app.get('host')}:${app.get('port')}`);
});

Deploying two identical Node apps and having them send requests to each other works. Strangely, though, the requests never show up in the linkerd dashboard.
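To see whether the node-local linkerd is observing any traffic at all, you can port-forward its admin port and inspect the dashboard or the raw counters. A sketch; 9990 is the admin port from the ConfigMap above, and /admin/metrics.json is the stock Finagle metrics endpoint that linkerd 1.x exposes:

L5D_POD=$(kubectl --namespace=production get pod -l app=l5d -o jsonpath="{.items[0].metadata.name}")
kubectl --namespace=production port-forward $L5D_POD 9990:9990 &
# dashboard: open http://localhost:9990 in a browser; raw counters:
curl -s http://localhost:9990/admin/metrics.json | head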

Where is your curl command running?

The linkerd service in this example doesn't expose a public IP address. You can confirm this with kubectl get svc/l5d; I expect you won't see any external IPs.

I think you'll need to modify the service definition, or create an additional explicit external service exposing the ClusterIP, in order to receive ingress traffic.
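On minikube in particular, a Service of type LoadBalancer normally never gets an external IP, so the jsonpath in the question expands to an empty string. One standard workaround (a sketch using stock minikube tooling) is to ask minikube for node-level URLs for the service and use the one corresponding to the outgoing port:

# prints one URL per service port, in declaration order (4140 first)
minikube service l5d --namespace=production --url
# then, hypothetically:
# http_proxy=<host:port for 4140> curl -s http://apiserver/readinezs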

