Overview of Kubernetes Network Components (Flannel, Open vSwitch, Calico)


The Kubernetes network model assumes a flat network space in which every Pod can reach every other Pod directly. Kubernetes originated at Google, and on GCE this network model is provided as part of the infrastructure, so Kubernetes simply assumes such a network already exists. When you build a Kubernetes cluster on your own private infrastructure, you cannot make that assumption: you have to implement this network yourself, first enabling Docker containers on different Nodes to reach one another, and only then run Kubernetes on top of it.
Several open source components implement this container network model. This section introduces the most common ones and how to install and configure them: Flannel, Open vSwitch, direct routing, and Calico.
1. Flannel
1.1 How Flannel Works
Flannel can build the underlying network that Kubernetes depends on because it does two things:
(1) It assists Kubernetes by giving every Docker container on every Node an IP address that does not conflict with any other container's address.
(2) It builds an overlay network between these IP addresses that delivers packets, unmodified, to the target container.

Let's look at how Flannel achieves these two things.

Flannel first creates a network interface named flannel0 on each Node; one side of it is attached to the docker0 bridge, and the other side is connected to a service process called flanneld.

The flanneld process does the heavy lifting:
On the upstream side, flanneld connects to etcd. It uses etcd to manage the pool of assignable IP address ranges, watches the actual address of every Pod recorded in etcd, and builds an in-memory routing table of Pod nodes.
On the downstream side, flanneld connects to docker0 and the physical network. Using that in-memory routing table, it encapsulates the packets that docker0 hands to it and delivers them over the physical network to the flanneld on the target Node, which is what enables direct Pod-to-Pod communication.
Flannel offers several choices for the underlying transport between nodes, such as UDP, VxLAN, and AWS VPC; anything that can reach the Flannel on the other side works. The source flanneld encapsulates and the target flanneld decapsulates, so docker0 ultimately sees the original data. The process is completely transparent and the containers never notice that Flannel sits in between. This article uses the UDP backend.
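The backend is chosen in the network configuration record that flanneld reads from etcd (written in a later step). A minimal sketch, assuming the same etcd key used below; UDP is the default when no Backend is given:
# choose the backend explicitly, e.g. udp or vxlan
etcdctl set /coreos.com/network/config '{ "Network": "172.16.0.0/16", "Backend": { "Type": "udp" } }'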

How does Flannel make sure that the Pod IPs assigned on different Nodes never conflict? Because Flannel uses a central etcd service to manage the address pool: every subnet it hands out is taken from the same shared range, so the Nodes coordinate automatically and conflicts cannot occur. Once Flannel has allocated a subnet to a Node, the rest is left to Docker: Flannel passes the subnet to Docker through Docker's startup parameters, for example:
--bip=172.17.18.1/24

In this way Flannel controls the address range of the docker0 bridge on every Node, guaranteeing that all Pod IP addresses sit in one flat network and never collide.

Flannel implements the Kubernetes network model nicely, but it adds extra hops: traffic has to pass through the flannel0 interface and then through the user-space flanneld process, and the reverse path is taken on the receiving Node, so some network latency is introduced.

In addition, Flannel uses UDP as its default transport. UDP itself is not a reliable protocol; although TCP at the two endpoints still provides reliable delivery, you should test carefully under heavy traffic and high concurrency to make sure nothing breaks.

1.2 Installing and Configuring Flannel
1) Install etcd
Flannel uses etcd as its datastore, so etcd must already be installed; the etcd installation itself is not covered here.

2) Install Flannel
Flannel must be installed on every Node. Releases can be downloaded from https://github.com/coreos/flannel/releases . Extract the downloaded flannel-<version>-linux-amd64.tar.gz and copy the flanneld and mk-docker-opts.sh binaries into /usr/bin; that completes the installation.
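A minimal sketch of the manual install, assuming a hypothetical release version (adjust to whatever version you downloaded):
wget https://github.com/coreos/flannel/releases/download/v0.10.0/flannel-v0.10.0-linux-amd64.tar.gz   # hypothetical version
tar -xzvf flannel-v0.10.0-linux-amd64.tar.gz
cp flanneld mk-docker-opts.sh /usr/bin/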

3) Configure Flannel
Here flanneld is managed as a systemd service.
Edit the service unit file /usr/lib/systemd/system/flanneld.service:
[root@k8s-node1 sysconfig]# more /usr/lib/systemd/system/flanneld.service
[Unit]
Description=flanneld overlay address etcd agent
After=network.target
Before=docker.service

[Service]
Type=notify
EnvironmentFile=/etc/sysconfig/flannel
ExecStart=/usr/bin/flanneld -etcd-endpoints=http://10.0.2.15:2379 $FLANNEL_OPTIONS

[Install]
RequiredBy=docker.service
WantedBy=multi-user.target

Edit the configuration file /etc/sysconfig/flannel and point it at the etcd URL:
[root@k8s-node2 sysconfig]# more flannel
# flanneld configuration options
# etcd url location. Point this to the server where etcd runs
FLANNEL_ETCD="http://10.0.2.15:2379"

# etcd config key. This is the configuration key that flannel queries
# For address range assignment
FLANNEL_ETCD_KEY="/coreos.com/network"

Before starting the flanneld service, add a network configuration record to etcd. flanneld uses this configuration to carve out the virtual IP subnet assigned to Docker on each Node.
[root@k8s-master ~]# etcdctl set /coreos.com/network/config '{ "Network": "172.16.0.0/16" }'
{ "Network": "172.16.0.0/16" }
Because Flannel will take over the docker0 bridge, stop the Docker service first if it is already running.
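Optionally, verify the record before starting flanneld; a quick sketch:
# confirm the network config exists in etcd, then stop Docker so Flannel can reconfigure docker0
etcdctl get /coreos.com/network/config
systemctl stop docker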

4) Start the Flannel service
systemctl daemon-reload
systemctl restart flanneld

5) Restart the Docker service
systemctl daemon-reload
systemctl restart docker

6) Set the docker0 bridge to the subnet allocated by Flannel
mk-docker-opts.sh -i
source /run/flannel/subnet.env
ifconfig docker0 ${FLANNEL_SUBNET}
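/run/flannel/subnet.env is written by flanneld and defines variables such as FLANNEL_NETWORK, FLANNEL_SUBNET and FLANNEL_MTU; sourcing it, as above, is what makes ${FLANNEL_SUBNET} available to ifconfig. A quick check of the values (on this node FLANNEL_SUBNET should be a /24 inside 172.16.0.0/16):
echo ${FLANNEL_NETWORK} ${FLANNEL_SUBNET} ${FLANNEL_MTU}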

When this is done, confirm that the IP address of the docker0 interface belongs to the flannel0 subnet:
[root@k8s-node1 system]# ip a
1: lo:  mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: enp0s3:  mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 08:00:27:9f:89:14 brd ff:ff:ff:ff:ff:ff
    inet 10.0.2.4/24 brd 10.0.2.255 scope global noprefixroute dynamic enp0s3
       valid_lft 993sec preferred_lft 993sec
    inet6 fe80::a00:27ff:fe9f:8914/64 scope link
       valid_lft forever preferred_lft forever
3: docker0:  mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:c9:52:3d:15 brd ff:ff:ff:ff:ff:ff
    inet 172.16.70.1/24 brd 172.16.70.255 scope global docker0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:c9ff:fe52:3d15/64 scope link
       valid_lft forever preferred_lft forever
6: flannel0:  mtu 1472 qdisc pfifo_fast state UNKNOWN group default qlen 500
    link/none
    inet 172.16.70.0/16 scope global flannel0
       valid_lft forever preferred_lft forever
    inet6 fe80::4b31:c92f:8cc9:3a22/64 scope link flags 800
       valid_lft forever preferred_lft forever
[root@k8s-node1 system]#

At this point the Flannel overlay network is fully set up.

Use ping to verify that the docker0 bridges on different Nodes can reach each other. For example, from Node1 (docker0 IP=172.16.70.1) ping Node2's docker0 (docker0 IP=172.16.13.1); through Flannel the Docker network on the other physical host is reachable:
[root@k8s-node1 system]# ifconfig flannel0
flannel0: flags=4305  mtu 1472
        inet 172.16.70.0  netmask 255.255.0.0  destination 172.16.70.0
        inet6 fe80::524a:4b9c:3391:7514  prefixlen 64  scopeid 0x20
        unspec 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00  txqueuelen 500  (UNSPEC)
        RX packets 5  bytes 420 (420.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 8  bytes 564 (564.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

[root@k8s-node1 system]# ifconfig docker0
docker0: flags=4099  mtu 1500
        inet 172.16.70.1  netmask 255.255.255.0  broadcast 172.16.70.255
        inet6 fe80::42:c9ff:fe52:3d15  prefixlen 64  scopeid 0x20
        ether 02:42:c9:52:3d:15  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 8  bytes 648 (648.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

[root@k8s-node1 system]# ping 172.16.13.1
PING 172.16.13.1 (172.16.13.1) 56(84) bytes of data.
64 bytes from 172.16.13.1: icmp_seq=1 ttl=62 time=1.63 ms
64 bytes from 172.16.13.1: icmp_seq=2 ttl=62 time=1.55 ms
^C
--- 172.16.13.1 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1002ms
rtt min/avg/max/mdev = 1.554/1.595/1.637/0.057 ms

In etcd we can also see the mapping Flannel recorded between each flannel0 subnet and the physical host's IP address:
[root@k8s-master etcd]# etcdctl ls /coreos.com/network/subnets
/coreos.com/network/subnets/172.16.70.0-24
/coreos.com/network/subnets/172.16.13.0-24

[root@k8s-master etcd]# etcdctl get /coreos.com/network/subnets/172.16.70.0-24
{"PublicIP":"10.0.2.4"}
[root@k8s-master etcd]# etcdctl get /coreos.com/network/subnets/172.16.13.0-24
{"PublicIP":"10.0.2.5"}

2 Open vSwitch
2.1 Basic Principles
Open vSwitch is an open source virtual switch, somewhat like the Linux bridge but far more capable. An Open vSwitch bridge can establish several kinds of communication channels (tunnels) directly, for example Open vSwitch with GRE/VxLAN, and these tunnels are easy to create with the OVS configuration commands. In the Kubernetes/Docker scenario we mainly build L3-to-L3 tunnels so that containers on different hosts can reach each other.

First, keep the docker0 bridge created by Docker, but plan the docker0 subnets so that the address ranges on the different Nodes do not overlap.
Next, create an Open vSwitch bridge (ovs) and use the ovs-vsctl command to add a GRE port to it; when adding the GRE port, set the remote IP to the address of the peer Node. This has to be repeated for every peer IP address (for a large cluster this is tedious manual work, so script it).
Finally, attach the ovs bridge as a port of the Docker bridge, bring both bridges up, and add a route for the overall Docker address range to the Docker bridge. The container networks on the two Nodes are then connected.

2.2 Communication Process
When an application inside a container accesses the address of a container on another Node, the packet goes out through the container's default route to the docker0 bridge. Because the ovs bridge is attached to docker0 as a port, the packet is handed to the ovs bridge. The ovs bridge has GRE/VxLAN tunnels configured to the ovs bridges on the other Nodes, so the packet is carried to the target Node and on to its docker0 and Pods.
On the receiving side, the routing rules configured on the Node direct the packet to the local docker0 bridge, which forwards it to the target Pod on that Node.

2.3 Characteristics of OVS with GRE/VxLAN Networking
The strength of OVS is that, as open source virtual switch software, it is relatively mature and stable, supports a wide range of tunnel protocols, and has been proven in projects such as OpenStack.
On the other hand, as we saw earlier, Flannel does more than build an overlay network for seamless Pod-to-Pod traffic: it is tightly integrated with Kubernetes and Docker, is aware of Kubernetes Services, maintains its routing tables dynamically, and uses etcd to help Docker allocate the docker0 subnets across the whole cluster. With OVS, much of that has to be done by hand.
Also, whether you use OVS or Flannel, implementing Pod-to-Pod communication through an overlay network adds some overhead. For applications that lean heavily on the network, evaluate the impact on your workload first.

2.4 Installing and Configuring Open vSwitch
The example below uses two Nodes and builds a GRE tunnel between them.

1) Install OVS on both Nodes
First make sure SELinux is disabled on both Nodes.
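A quick sketch of turning SELinux off (edit /etc/selinux/config to make the change permanent):
setenforce 0
sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config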
Install the package on both Nodes:
yum -y install openvswitch

Start Open vSwitch and check its status; both the ovsdb-server and ovs-vswitchd processes should be running.
[root@k8s-node2 system]# systemctl start openvswitch
[root@k8s-node2 system]# systemctl status openvswitch
● openvswitch.service - Open vSwitch
   Loaded: loaded (/usr/lib/systemd/system/openvswitch.service; disabled; vendor preset: disabled)
   Active: active (exited) since Sun 2018-06-10 17:06:40 CST; 6s ago
  Process: 8368 ExecStart=/bin/true (code=exited, status=0/SUCCESS)
Main PID: 8368 (code=exited, status=0/SUCCESS)

Jun 10 17:06:40 k8s-node2.test.com systemd[1]: Starting Open vSwitch...
Jun 10 17:06:40 k8s-node2.test.com systemd[1]: Started Open vSwitch.
[root@k8s-node2 system]# ps -ef|grep ovs
root      8352     1  0 17:06 ?        00:00:00 ovsdb-server: monitoring pid 8353 (healthy)
root      8353  8352  0 17:06 ?        00:00:00 ovsdb-server /etc/openvswitch/conf.db -vconsole:emer -vsyslog:err -vfile:info --remote=punix:/var/run/openvswitch/db.sock --private-key=db:Open_vSwitch,SSL,private_key --certificate=db:Open_vSwitch,SSL,certificate --bootstrap-ca-cert=db:Open_vSwitch,SSL,ca_cert --no-chdir --log-file=/var/log/openvswitch/ovsdb-server.log --pidfile=/var/run/openvswitch/ovsdb-server.pid --detach --monitor
root      8364     1  0 17:06 ?        00:00:00 ovs-vswitchd: monitoring pid 8365 (healthy)
root      8365  8364  0 17:06 ?        00:00:00 ovs-vswitchd unix:/var/run/openvswitch/db.sock -vconsole:emer -vsyslog:err -vfile:info --mlockall --no-chdir --log-file=/var/log/openvswitch/ovs-vswitchd.log --pidfile=/var/run/openvswitch/ovs-vswitchd.pid --detach --monitor
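Optionally, confirm that the OVS database answers; at this point the bridge list is still empty:
ovs-vsctl show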

2) Create the bridge and the GRE tunnel
Next, on every Node create an ovs bridge named br0, create a GRE tunnel on it that connects to the peer's bridge, and finally attach br0 as a port of the docker0 Linux bridge.
After that, the docker0 subnets on the two machines can reach each other.
Taking Node1 as the example, the steps are as follows:
(1) Create the ovs bridge
[root@k8s-node1 system]# ovs-vsctl add-br br0
(2) Create the GRE tunnel to the peer; remote_ip is the IP address of the peer Node's physical interface (eth0/enp0s3)
[root@k8s-node1 system]# ovs-vsctl add-port br0 gre1 -- set interface gre1 type=gre option:remote_ip=10.0.2.5
(3) Attach br0 to the local docker0 so that container traffic flows through the OVS GRE tunnel
[root@k8s-node1 system]# brctl addif docker0 br0
(4) Bring up the br0 and docker0 bridges
[root@k8s-node1 system]# ip link set dev br0 up
[root@k8s-node1 system]# ip link set dev docker0 up
(5) Add routing rules
The docker0 subnets on 10.0.2.5 and 10.0.2.4 are 172.16.20.0/24 and 172.16.10.0/24 respectively. Traffic to both subnets has to be routed through the local docker0 bridge, and one of the two /24s is reached through the OVS GRE tunnel. Therefore, add a route on each Node that sends the whole 172.16.0.0/16 range through docker0:
[root@k8s-node1 system]# ip route add 172.16.0.0/16 dev docker0
(6) Flush the iptables rules created by Docker and the Linux firewall rules; the latter contain a rule that rejects ICMP packets
[root@k8s-node1 system]# iptables -t nat -F
[root@k8s-node1 system]# iptables -F

After finishing these steps on Node1, apply the same configuration on Node2.

Once configured, the key information on Node1 (its host IP address, the docker0 IP address, and the routing table) looks like this:
[root@k8s-node1 system]# ip addr
1: lo:  mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: enp0s3:  mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 08:00:27:9f:89:14 brd ff:ff:ff:ff:ff:ff
    inet 10.0.2.4/24 brd 10.0.2.255 scope global noprefixroute dynamic enp0s3
       valid_lft 842sec preferred_lft 842sec
    inet6 fe80::a00:27ff:fe9f:8914/64 scope link
       valid_lft forever preferred_lft forever
3: docker0:  mtu 1500 qdisc noqueue state UP group default
    link/ether 02:42:c9:52:3d:15 brd ff:ff:ff:ff:ff:ff
    inet 172.16.10.1/24 brd 172.16.10.255 scope global docker0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:c9ff:fe52:3d15/64 scope link
       valid_lft forever preferred_lft forever
10: ovs-system:  mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 5e:a9:02:75:aa:98 brd ff:ff:ff:ff:ff:ff
11: br0:  mtu 1500 qdisc noqueue master docker0 state UNKNOWN group default qlen 1000
    link/ether 82:e3:9a:29:3c:46 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::a8de:24ff:fef4:f8ec/64 scope link
       valid_lft forever preferred_lft forever
12: gre0@NONE:  mtu 1476 qdisc noop state DOWN group default qlen 1000
    link/gre 0.0.0.0 brd 0.0.0.0
13: gretap0@NONE:  mtu 1462 qdisc noop state DOWN group default qlen 1000
    link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
14: gre_system@NONE:  mtu 65490 qdisc pfifo_fast master ovs-system state UNKNOWN group default qlen 1000
    link/ether 76:53:6f:11:e0:f8 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::7453:6fff:fe11:e0f8/64 scope link
       valid_lft forever preferred_lft forever
[root@k8s-node1 system]#

[root@k8s-node1 system]# ip route
default via 10.0.2.1 dev enp0s3 proto dhcp metric 100
10.0.2.0/24 dev enp0s3 proto kernel scope link src 10.0.2.4 metric 100
172.16.0.0/16 dev docker0 scope link
172.16.10.0/24 dev docker0 proto kernel scope link src 172.16.10.1

3) Verify connectivity between the two Nodes
[root@k8s-node1 system]# ping 172.16.20.1
PING 172.16.20.1 (172.16.20.1) 56(84) bytes of data.
64 bytes from 172.16.20.1: icmp_seq=1 ttl=64 time=2.39 ms
64 bytes from 172.16.20.1: icmp_seq=2 ttl=64 time=3.36 ms
^C
--- 172.16.20.1 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1004ms
rtt min/avg/max/mdev = 2.398/2.882/3.366/0.484 ms
[root@k8s-node1 system]#

Capture packets on Node2 while the ping runs:
[root@k8s-node2 system]# tcpdump -i docker0 -nnn
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on docker0, link-type EN10MB (Ethernet), capture size 262144 bytes
23:43:59.020039 IP 172.16.10.1 > 172.16.20.1: ICMP echo request, id 20831, seq 26, length 64
23:43:59.020096 IP 172.16.20.1 > 172.16.10.1: ICMP echo reply, id 20831, seq 26, length 64
23:44:00.020899 IP 172.16.10.1 > 172.16.20.1: ICMP echo request, id 20831, seq 27, length 64
23:44:00.020939 IP 172.16.20.1 > 172.16.10.1: ICMP echo reply, id 20831, seq 27, length 64
23:44:01.021706 IP 172.16.10.1 > 172.16.20.1: ICMP echo request, id 20831, seq 28, length 64
23:44:01.021750 IP 172.16.20.1 > 172.16.10.1: ICMP echo reply, id 20831, seq 28, length 64

To verify that the container network works end to end in Kubernetes, create an RC with 2 replicas so that the Pods land on both Nodes, then test Pod-to-Pod access across the Nodes:
[root@k8s-master ~]# more frontend-rc.yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: frontend
  labels:
    name: frontend
spec:
   replicas: 2
   selector:
     name: frontend
   template:
     metadata:
       labels:
         name: frontend
     spec:
       containers:
       - name: php-redis
         image: kubeguide/guestbook-php-frontend
         ports:
         - containerPort: 80
           hostPort: 80
         env:
         - name: GET_HOSTS_FROM
           value: env
[root@k8s-master ~]#

After the Pods are created, check their status and placement:
[root@k8s-master ~]# kubectl get rc
NAME       DESIRED   CURRENT   READY     AGE
frontend   2         2         2         33m
[root@k8s-master ~]# kubectl get pods -o wide
NAME             READY     STATUS    RESTARTS   AGE       IP            NODE
frontend-b6krg   1/1       Running   1          33m       172.16.20.2   10.0.2.5
frontend-qk6zc   1/1       Running   0          33m       172.16.10.2   10.0.2.4

Enter the Pod running on Node1:
[root@k8s-master ~]# kubectl exec -it frontend-qk6zc -c php-redis /bin/bash
root@frontend-qk6zc:/var/www/html# ip a
1: lo:  mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
2: gre0@NONE:  mtu 1476 qdisc noop state DOWN group default qlen 1000
    link/gre 0.0.0.0 brd 0.0.0.0
3: gretap0@NONE:  mtu 1462 qdisc noop state DOWN group default qlen 1000
    link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
22: eth0@if23:  mtu 1500 qdisc noqueue state UP group default
    link/ether 02:42:ac:10:0a:02 brd ff:ff:ff:ff:ff:ff
    inet 172.16.10.2/24 brd 172.16.10.255 scope global eth0
       valid_lft forever preferred_lft forever
root@frontend-qk6zc:/var/www/html#
From the Pod on Node1, ping the address of the Pod running on Node2:
root@frontend-qk6zc:/var/www/html# ping 172.16.20.2
PING 172.16.20.2 (172.16.20.2): 56 data bytes
64 bytes from 172.16.20.2: icmp_seq=0 ttl=63 time=2017.587 ms
64 bytes from 172.16.20.2: icmp_seq=1 ttl=63 time=1014.193 ms
64 bytes from 172.16.20.2: icmp_seq=2 ttl=63 time=13.232 ms
64 bytes from 172.16.20.2: icmp_seq=3 ttl=63 time=1.122 ms
64 bytes from 172.16.20.2: icmp_seq=4 ttl=63 time=1.379 ms
64 bytes from 172.16.20.2: icmp_seq=5 ttl=63 time=1.474 ms
64 bytes from 172.16.20.2: icmp_seq=6 ttl=63 time=1.371 ms
64 bytes from 172.16.20.2: icmp_seq=7 ttl=63 time=1.583 ms
^C--- 172.16.20.2 ping statistics ---
8 packets transmitted, 8 packets received, 0% packet loss
round-trip min/avg/max/stddev = 1.122/381.493/2017.587/701.350 ms
root@frontend-qk6zc:/var/www/html#
Capture packets on Node2 to confirm the traffic arrives:
[root@k8s-node2 system]# tcpdump -i docker0 -nnn
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on docker0, link-type EN10MB (Ethernet), capture size 262144 bytes
00:13:18.601908 IP 172.16.10.2 > 172.16.20.2: ICMP echo request, id 38, seq 4, length 64
00:13:18.601947 IP 172.16.20.2 > 172.16.10.2: ICMP echo reply, id 38, seq 4, length 64
00:13:18.601956 IP 172.16.20.2 > 172.16.10.2: ICMP echo reply, id 38, seq 4, length 64
00:13:28.609109 IP 172.16.10.2 > 172.16.20.2: ICMP echo request, id 38, seq 5, length 64
00:13:28.609165 IP 172.16.20.2 > 172.16.10.2: ICMP echo reply, id 38, seq 5, length 64
00:13:28.609179 IP 172.16.20.2 > 172.16.10.2: ICMP echo reply, id 38, seq 5, length 64
00:13:29.612564 IP 172.16.10.2 > 172.16.20.2: ICMP echo request, id 38, seq 6, length 64

Note: if the ping fails in this experiment, check the firewalld settings on the Nodes.

At this point the OVS-based network is working. Note, however, that GRE is a point-to-point tunnel: with many Nodes you would need N*(N-1) GRE tunnels, i.e. a full mesh between all Nodes, which becomes hard to manage as the cluster grows.

3 Direct Routing
The idea of direct routing is to let every Node know which Node hosts which docker0 subnet, so that a packet destined for a Pod on another Node can be routed straight to that Node without any overlay. The simplest approach is to configure, on every Node, static routes that map each peer's docker0 subnet to that peer's host IP. The drawback is maintenance: every time a Node is added, routes must be added on all the other Nodes, and whenever a docker0 subnet changes every Node's configuration has to be updated, which does not scale. To keep these routes in sync automatically, a dynamic routing protocol can be used so that every Node learns the routes to the other Nodes' docker0 subnets by itself.
There are several implementations of dynamic routing to choose from, such as Quagga and Zebra.
The following briefly walks through a direct-routing setup.
(1) Manually assign the Docker bridge subnet on each Node
Either change docker0's own subnet or create a new bridge and point Docker at it with the --bridge parameter; the only requirement is that the Docker bridge subnets on the different Nodes do not overlap.
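A hypothetical sketch using Docker's daemon configuration (any non-overlapping subnets work):
# /etc/docker/daemon.json on Node1:  { "bip": "172.16.10.1/24" }
# /etc/docker/daemon.json on Node2:  { "bip": "172.16.20.1/24" }
systemctl restart docker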
(2) Run Quagga on every Node
Here we use a Docker image that packages Quagga and run Quagga itself as a container. Pull the image on every Node:
# docker pull georce/router
Start the Quagga container on every Node. Note that Quagga must run with the --privileged flag, and with --net=host so that it uses the host's network directly.
# docker run -itd --name=router --privileged --net=host georce/router

Once they are running, the Quagga instances on the Nodes exchange routes with one another and automatically add the routes to the docker0 subnets on the other machines.
After a short while, the docker0 bridges on all Nodes (and hence the containers behind them) can reach one another.
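A hypothetical check on Node1 once Quagga has converged, reusing the two-node addressing from the OVS example; a route to Node2's docker0 subnet should show up:
ip route
# expected to contain something like:
# 172.16.20.0/24 via 10.0.2.5 dev enp0s3 proto zebra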
Note: when a Node is added to the cluster later, repeat the steps above on the new Node.

4 Calico Container Networking and Network Policy
4.1 Calico Overview
Calico is a networking solution for inter-container communication. Unlike overlay-based solutions, it does not encapsulate traffic in an overlay network; instead it provides a pure layer-3 networking model. In this model every container communicates directly over IP, and routing is what gets packets to their destination. The node hosting the containers acts much like a traditional router, performing route lookups. For this routing to work, every virtual router (that is, every host running containers) must have a way of learning the routing information of the whole cluster; Calico uses BGP (Border Gateway Protocol) for this. Besides container platforms such as Kubernetes and public clouds such as AWS and GCE, Calico also integrates easily with IaaS platforms such as OpenStack.

On every compute node, Calico uses the Linux kernel to implement a high-performance vRouter that forwards traffic. Each vRouter advertises the routes of the containers running on its node to the rest of the Calico network over BGP, so traffic eventually reaches the right node. Calico guarantees that all traffic between containers is carried by plain IP routing. The nodes can be wired straight into the existing data center fabric (L2 or L3) with no NAT, no tunnels, and no overlay network, so there is no extra encapsulation or decapsulation, which saves CPU cycles and improves network efficiency.

In small clusters the Calico nodes can simply peer with one another directly; larger deployments add one or more BGP route reflectors to scale the route distribution.

In addition, Calico provides a rich set of network policies based on iptables. It implements the Kubernetes Network Policy API and can therefore restrict which containers are allowed to reach which.

The main Calico components are:
Felix: the Calico agent. It runs on every Node and programs the network resources for containers (IP addresses, routes, iptables rules, and so on) so that containers on different hosts can reach each other.
etcd: the datastore used by Calico.
BGP Client (BIRD): advertises the routes that Felix programs on each Node to the rest of the Calico network via BGP.
BGP Route Reflector (BIRD): used in large deployments to distribute routes hierarchically instead of requiring a full mesh.
calicoctl: the Calico command-line management tool.
4.2 Deploying the Calico Services
The main steps for deploying Calico on Kubernetes are as follows.
4.2.1 Modify the Kubernetes service startup parameters and restart the services
On the Master, add the kube-apiserver startup parameter --allow-privileged=true (calico-node must run in privileged mode on every Node).
On every Node, add the kubelet startup parameters --network-plugin=cni (use the CNI network plugin) and --allow-privileged=true.
The example Kubernetes cluster contains two Nodes: Node1 (10.0.2.4) and Node2 (10.0.2.5).
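A sketch of where these flags might live, assuming the services are configured through option files under /etc/kubernetes (file and variable names vary between installations):
# on the Master (hypothetical options file for kube-apiserver):
#   KUBE_API_ARGS="--allow-privileged=true ..."
# on every Node (hypothetical options file for kubelet):
#   KUBELET_ARGS="--network-plugin=cni --allow-privileged=true ..."
systemctl daemon-reload && systemctl restart kube-apiserver    # Master
systemctl daemon-reload && systemctl restart kubelet           # each Node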

4.2.2 Create the Calico services, mainly calico-node and the Calico policy controller
The resource objects to create are:
A ConfigMap, calico-config, containing the configuration parameters Calico needs.
A Secret, calico-etcd-secrets, used to connect to etcd over TLS.
The calico/node container running on every Node, deployed as a DaemonSet.
The Calico CNI binaries and network configuration installed on every Node (done by the install-cni container).
A Deployment running calico/kube-policy-controller, which ties the Network Policies defined for Pods in the Kubernetes cluster to Calico.
4.2.3 Detailed Calico installation steps
Download the Calico YAML manifest from the Calico website: https://docs.projectcalico.org/v2.1/getting-started/kubernetes/installation/hosted/calico.yaml .
This file defines all the resource objects needed to run Calico. They are described one by one below.
(1) The configuration Calico needs is created as a ConfigMap with the following content
# Calico Version v2.1.5
# https://docs.projectcalico.org/v2.1/releases#v2.1.5
# This manifest includes the following component versions:
#   calico/node:v1.1.3
#   calico/cni:v1.8.0
#   calico/kube-policy-controller:v0.5.4

# This ConfigMap is used to configure a self-hosted Calico installation.
kind: ConfigMap
apiVersion: v1
metadata:
  name: calico-config
  namespace: kube-system
data:
  # Configure this with the location of your etcd cluster.
  etcd_endpoints: "http://10.0.2.15:2379"

  # Configure the Calico backend to use.
  calico_backend: "bird"

  # The CNI network configuration to install on each node.
  cni_network_config: |-
    {
        "name": "k8s-pod-network",
        "type": "calico",
        "etcd_endpoints": "__ETCD_ENDPOINTS__",
        "etcd_key_file": "__ETCD_KEY_FILE__",
        "etcd_cert_file": "__ETCD_CERT_FILE__",
        "etcd_ca_cert_file": "__ETCD_CA_CERT_FILE__",
        "log_level": "info",
        "ipam": {
            "type": "calico-ipam"
        },
        "policy": {
            "type": "k8s",
            "k8s_api_root": "https://__KUBERNETES_SERVICE_HOST__:__KUBERNETES_SERVICE_PORT__",
            "k8s_auth_token": "__SERVICEACCOUNT_TOKEN__"
        },
        "kubernetes": {
            "kubeconfig": "__KUBECONFIG_FILEPATH__"
        }
    }

  # If you're using TLS enabled etcd uncomment the following.
  # You must also populate the Secret below with these files.
  etcd_ca: ""   # "/calico-secrets/etcd-ca"
  etcd_cert: "" # "/calico-secrets/etcd-cert"
  etcd_key: ""  # "/calico-secrets/etcd-key"

The main parameters are:
etcd_endpoints: Calico uses etcd to store its network topology and state; this parameter specifies the etcd address. The etcd used by the Kubernetes Master can be reused, or a separate etcd can be deployed.
calico_backend: the Calico backend; bird by default.
cni_network_config: a CNI-compliant network configuration. type=calico means kubelet will look for an executable named "calico" under /opt/cni/bin and call it to set up the container network. type=calico-ipam in the ipam section means kubelet will look for an executable named "calico-ipam" under /opt/cni/bin to handle container IP address assignment.
If the etcd service has TLS enabled, the corresponding ca, cert, and key files must also be specified.

(2) The Secret used to access etcd. For an etcd without TLS, leave the data fields empty
# The following contains k8s Secrets for use with a TLS enabled etcd cluster.
# For information on populating Secrets, see http://kubernetes.io/docs/user-guide/secrets/
apiVersion: v1
kind: Secret
type: Opaque
metadata:
  name: calico-etcd-secrets
  namespace: kube-system
data:
  # Populate the following files with etcd TLS configuration if desired, but leave blank if
  # not using TLS for etcd.
  # This self-hosted install expects three files with the following names.  The values
  # should be base64 encoded strings of the entire contents of each file.
  # etcd-key: null
  # etcd-cert: null
  # etcd-ca: null
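If etcd does use TLS, the values above must be the base64-encoded contents of the certificate files; a minimal sketch (the file paths are hypothetical):
base64 -w 0 /etc/etcd/ssl/etcd-key.pem    # value for etcd-key
base64 -w 0 /etc/etcd/ssl/etcd.pem        # value for etcd-cert
base64 -w 0 /etc/etcd/ssl/ca.pem          # value for etcd-ca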

(3) calico-node: a DaemonSet that runs a calico-node service container and an install-cni container on every Node
# This manifest installs the calico/node container, as well
# as the Calico CNI plugins and network config on
# each master and worker node in a Kubernetes cluster.
kind: DaemonSet
apiVersion: extensions/v1beta1
metadata:
  name: calico-node
  namespace: kube-system
  labels:
    k8s-app: calico-node
spec:
  selector:
    matchLabels:
      k8s-app: calico-node
  template:
    metadata:
      labels:
        k8s-app: calico-node
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ''
        scheduler.alpha.kubernetes.io/tolerations: |
          [{"key": "dedicated", "value": "master", "effect": "NoSchedule" },
           {"key":"CriticalAddonsOnly", "operator":"Exists"}]
    spec:
      hostNetwork: true
      containers:
        # Runs calico/node container on each Kubernetes node.  This
        # container programs network policy and routes on each
        # host.
        - name: calico-node
          image: quay.io/calico/node:v1.1.3
          env:
            # The location of the Calico etcd cluster.
            - name: ETCD_ENDPOINTS
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_endpoints
            # Choose the backend to use.
            - name: CALICO_NETWORKING_BACKEND
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: calico_backend
            # Disable file logging so `kubectl logs` works.
            - name: CALICO_DISABLE_FILE_LOGGING
              value: "true"
            # Set Felix endpoint to host default action to ACCEPT.
            - name: FELIX_DEFAULTENDPOINTTOHOSTACTION
              value: "ACCEPT"
            # Configure the IP Pool from which Pod IPs will be chosen.
            - name: CALICO_IPV4POOL_CIDR
              value: "192.168.0.0/16"
            - name: CALICO_IPV4POOL_IPIP
              value: "always"
            # Disable IPv6 on Kubernetes.
            - name: FELIX_IPV6SUPPORT
              value: "false"
            # Set Felix logging to "info"
            - name: FELIX_LOGSEVERITYSCREEN
              value: "info"
            # Location of the CA certificate for etcd.
            - name: ETCD_CA_CERT_FILE
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_ca
            # Location of the client key for etcd.
            - name: ETCD_KEY_FILE
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_key
            # Location of the client certificate for etcd.
            - name: ETCD_CERT_FILE
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_cert
            # Auto-detect the BGP IP address.
            - name: IP
              value: ""
          securityContext:
            privileged: true
          resources:
            requests:
              cpu: 250m
          volumeMounts:
            - mountPath: /lib/modules
              name: lib-modules
              readOnly: true
            - mountPath: /var/run/calico
              name: var-run-calico
              readOnly: false
            - mountPath: /calico-secrets
              name: etcd-certs
        # This container installs the Calico CNI binaries
        # and CNI network config file on each node.
        - name: install-cni
          image: quay.io/calico/cni:v1.8.0
          command: ["/install-cni.sh"]
          env:
            # The location of the Calico etcd cluster.
            - name: ETCD_ENDPOINTS
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_endpoints
            # The CNI network config to install on each node.
            - name: CNI_NETWORK_CONFIG
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: cni_network_config
          volumeMounts:
            - mountPath: /host/opt/cni/bin
              name: cni-bin-dir
            - mountPath: /host/etc/cni/net.d
              name: cni-net-dir
            - mountPath: /calico-secrets
              name: etcd-certs
      volumes:
        # Used by calico/node.
        - name: lib-modules
          hostPath:
            path: /lib/modules
        - name: var-run-calico
          hostPath:
            path: /var/run/calico
        # Used to install CNI.
        - name: cni-bin-dir
          hostPath:
            path: /opt/cni/bin
        - name: cni-net-dir
          hostPath:
            path: /etc/cni/net.d
        # Mount in the etcd TLS secrets.
        - name: etcd-certs
          secret:
            secretName: calico-etcd-secrets

This Pod contains the following two containers:
calico-node: the Calico service program. It sets up the Pods' network resources and ensures the Pod network is reachable across Nodes. It runs with hostNetwork enabled, using the host's network directly.
install-cni: installs the CNI binaries into /opt/cni/bin on each Node and writes the corresponding network configuration file into /etc/cni/net.d.
The main parameters of the calico-node container are:
CALICO_IPV4POOL_CIDR: the Calico IPAM IP pool; Pod IP addresses are allocated from this pool.
CALICO_IPV4POOL_IPIP: whether to enable IPIP mode. With IPIP enabled, Calico creates a virtual tunnel interface named "tunl0" on every Node.
FELIX_IPV6SUPPORT: whether to enable IPv6.
FELIX_LOGSEVERITYSCREEN: the log level.
The IP pool can operate in two modes: BGP or IPIP.
To use IPIP mode, set CALICO_IPV4POOL_IPIP="always"; to disable IPIP, set CALICO_IPV4POOL_IPIP="off", in which case pure BGP mode is used.

IPIP mode builds a tunnel between the Nodes' routes and connects the two networks through it. With IPIP enabled, Calico creates a virtual network interface named "tunl0" on every Node.

BGP mode uses the physical machines directly as virtual routers (vRouters) and creates no additional tunnels.
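To switch an existing deployment from IPIP to plain BGP you would change the CALICO_IPV4POOL_IPIP value in the DaemonSet above and re-apply the manifest; a minimal sketch:
# edit calico.yaml so that CALICO_IPV4POOL_IPIP is "off", then:
kubectl apply -f calico.yaml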

(4) The calico-policy-controller Deployment
It connects the Network Policies defined for Pods in the Kubernetes cluster to Calico.
# This manifest deploys the Calico policy controller on Kubernetes.
# See https://github.com/projectcalico/k8s-policy
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: calico-policy-controller
  namespace: kube-system
  labels:
    k8s-app: calico-policy
  annotations:
    scheduler.alpha.kubernetes.io/critical-pod: ''
    scheduler.alpha.kubernetes.io/tolerations: |
      [{"key": "dedicated", "value": "master", "effect": "NoSchedule" },
       {"key":"CriticalAddonsOnly", "operator":"Exists"}]
spec:
  # The policy controller can only have a single active instance.
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      name: calico-policy-controller
      namespace: kube-system
      labels:
        k8s-app: calico-policy
    spec:
      # The policy controller must run in the host network namespace so that
      # it isn't governed by policy that would prevent it from working.
      hostNetwork: true
      containers:
        - name: calico-policy-controller
          image: quay.io/calico/kube-policy-controller:v0.5.4
          env:
            # The location of the Calico etcd cluster.
            - name: ETCD_ENDPOINTS
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_endpoints
            # Location of the CA certificate for etcd.
            - name: ETCD_CA_CERT_FILE
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_ca
            # Location of the client key for etcd.
            - name: ETCD_KEY_FILE
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_key
            # Location of the client certificate for etcd.
            - name: ETCD_CERT_FILE
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_cert
            # The location of the Kubernetes API.  Use the default Kubernetes
            # service for API access.
            - name: K8S_API
              value: "https://kubernetes.default:443"
            # Since we're running in the host namespace and might not have KubeDNS
            # access, configure the container's /etc/hosts to resolve
            # kubernetes.default to the correct service clusterIP.
            - name: CONFIGURE_ETC_HOSTS
              value: "true"
          volumeMounts:
            # Mount in the etcd TLS secrets.
            - mountPath: /calico-secrets
              name: etcd-certs
      volumes:
        # Mount in the etcd TLS secrets.
        - name: etcd-certs
          secret:
            secretName: calico-etcd-secrets

Once a user defines a Network Policy for Pods in the Kubernetes cluster, calico-policy-controller notifies the calico-node service on each Node, which then programs the corresponding iptables rules on the host to enforce the access policy between Pods.

With the manifest prepared, all the Calico resource objects can now be created.
[root@k8s-master ~]# kubectl create -f calico.yaml
configmap "calico-config" created
secret "calico-etcd-secrets" created
daemonset "calico-node" created
deployment "calico-policy-controller" created
[root@k8s-master ~]#
Make sure all the services are running correctly:
[root@k8s-master ~]# kubectl get pods --namespace=kube-system -o wide
NAME                                        READY     STATUS    RESTARTS   AGE       IP         NODE
calico-node-59n9j                           2/2       Running   1          9h        10.0.2.5   10.0.2.5
calico-node-cksq5                           2/2       Running   1          9h        10.0.2.4   10.0.2.4
calico-policy-controller-54dbfcd7c7-ctxzz   1/1       Running   0          9h        10.0.2.5   10.0.2.5
[root@k8s-master ~]#

[root@k8s-master ~]# kubectl get rs --namespace=kube-system
NAME                                  DESIRED   CURRENT   READY     AGE
calico-policy-controller-54dbfcd7c7   1         1         1         9h
[root@k8s-master ~]#
[root@k8s-master ~]# kubectl get deployment --namespace=kube-system
NAME                       DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
calico-policy-controller   1         1         1            1           9h
[root@k8s-master ~]# kubectl get secret --namespace=kube-system
NAME                  TYPE      DATA      AGE
calico-etcd-secrets   Opaque    0         9h
[root@k8s-master ~]# kubectl get configmap --namespace=kube-system
NAME            DATA      AGE
calico-config   6         9h
[root@k8s-master ~]#

Looking at the containers on Node1:
[root@k8s-node1 ~]# docker ps
CONTAINER ID        IMAGE                                      COMMAND             CREATED             STATUS              PORTS               NAMES
dd431155ed2d        quay.io/calico/cni                         "/install-cni.sh"   8 hours ago         Up 8 hours                              k8s_install-cni_calico-node-cksq5_kube-system_e3ed0d80-6fe9-11e8-8a4a-080027800835_0
e7f20b684fc2        quay.io/calico/node                        "start_runit"       8 hours ago         Up 8 hours                              k8s_calico-node_calico-node-cksq5_kube-system_e3ed0d80-6fe9-11e8-8a4a-080027800835_1
1c9010e4b661        gcr.io/google_containers/pause-amd64:3.0   "/pause"            8 hours ago         Up 8 hours                              k8s_POD_calico-node-cksq5_kube-system_e3ed0d80-6fe9-11e8-8a4a-080027800835_1
[root@k8s-node1 ~]#
[root@k8s-node1 ~]# docker images
REPOSITORY                             TAG                 IMAGE ID            CREATED             SIZE
cloudnil/pause-amd64                   3.0                 66c684b679d2        11 months ago       747kB
gcr.io/google_containers/pause-amd64   3.0                 66c684b679d2        11 months ago       747kB
quay.io/calico/cni                     v1.8.0              8de7b24bd7ec        13 months ago       67MB
quay.io/calico/node                    v1.1.3              573ddcad1ff5        13 months ago       217MB
kubeguide/guestbook-php-frontend       latest              47ee16830e89        23 months ago       510MB
Node2 additionally runs the calico-policy-controller Pod:
[root@k8s-node2 ~]# docker ps
CONTAINER ID        IMAGE                                      COMMAND              CREATED             STATUS              PORTS               NAMES
ff4dbcd77892        quay.io/calico/kube-policy-controller      "/dist/controller"   8 hours ago         Up 8 hours                              k8s_calico-policy-controller_calico-policy-controller-54dbfcd7c7-ctxzz_kube-system_e3f067be-6fe9-11e8-8a4a-080027800835_0
60439cfbde00        quay.io/calico/cni                         "/install-cni.sh"    8 hours ago         Up 8 hours                              k8s_install-cni_calico-node-59n9j_kube-system_e3efa53c-6fe9-11e8-8a4a-080027800835_1
c55f279ef3c1        quay.io/calico/node                        "start_runit"        8 hours ago         Up 8 hours                              k8s_calico-node_calico-node-59n9j_kube-system_e3efa53c-6fe9-11e8-8a4a-080027800835_0
17d08ed5fd86        gcr.io/google_containers/pause-amd64:3.0   "/pause"             8 hours ago         Up 8 hours                              k8s_POD_calico-node-59n9j_kube-system_e3efa53c-6fe9-11e8-8a4a-080027800835_1
aa85ee06190f        gcr.io/google_containers/pause-amd64:3.0   "/pause"             8 hours ago         Up 8 hours                              k8s_POD_calico-policy-controller-54dbfcd7c7-ctxzz_kube-system_e3f067be-6fe9-11e8-8a4a-080027800835_0
[root@k8s-node2 ~]#
[root@k8s-node2 ~]#
[root@k8s-node2 ~]# docker images
REPOSITORY                              TAG                 IMAGE ID            CREATED             SIZE
cloudnil/pause-amd64                    3.0                 66c684b679d2        11 months ago       747kB
gcr.io/google_containers/pause-amd64    3.0                 66c684b679d2        11 months ago       747kB
quay.io/calico/cni                      v1.8.0              8de7b24bd7ec        13 months ago       67MB
quay.io/calico/node                     v1.1.3              573ddcad1ff5        13 months ago       217MB
quay.io/calico/kube-policy-controller   v0.5.4              ac66b6e8f19e        14 months ago       22.6MB
kubeguide/guestbook-php-frontend        latest              47ee16830e89        23 months ago       510MB
georce/router                           latest              f3074d9a8369        3 years ago         190MB
[root@k8s-node2 ~]#

Once calico-node is running, it generates the following files and directories under /etc/cni/net.d/ in accordance with the CNI specification, and installs the calico and calico-ipam binaries under /opt/cni/bin for kubelet to call.
10-calico.conf: the CNI-compliant network configuration, in which type=calico means the plugin binary is named calico.
calico-kubeconfig: the kubeconfig file Calico needs.
calico-tls directory: the files needed to connect to etcd over TLS.

[root@k8s-node1 ~]# cd /etc/cni/net.d/
[root@k8s-node1 net.d]# ls
10-calico.conf  calico-kubeconfig  calico-tls
[root@k8s-node1 net.d]#
[root@k8s-node1 net.d]# ls /opt/cni/bin
calico  calico-ipam  flannel  host-local  loopback
[root@k8s-node1 net.d]#
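10-calico.conf is the cni_network_config template from the ConfigMap with its placeholders filled in by the install-cni container; a rough sketch of what it contains, assuming the etcd endpoint used in this article (the exact contents may differ):
cat /etc/cni/net.d/10-calico.conf
# {
#     "name": "k8s-pod-network",
#     "type": "calico",
#     "etcd_endpoints": "http://10.0.2.15:2379",
#     "ipam": { "type": "calico-ipam" },
#     ...
# }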

Looking at the network interfaces on k8s-node1, a new interface named "tunl0" has appeared with the address 192.168.196.128:
[root@k8s-node1 net.d]# ifconfig tunl0
tunl0: flags=193  mtu 1440
        inet 192.168.196.128  netmask 255.255.255.255
        tunnel   txqueuelen 1000  (IPIP Tunnel)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

Likewise on k8s-node2, a new "tunl0" interface has appeared with the address 192.168.19.192:
[root@k8s-node2 ~]# ifconfig tunl0
tunl0: flags=193  mtu 1440
        inet 192.168.19.192  netmask 255.255.255.255
        tunnel   txqueuelen 1000  (IPIP Tunnel)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

These addresses are allocated by calico-node from the 192.168.0.0/16 IP pool. From this point on, docker0 no longer plays any part in assigning Pod IP addresses in the Kubernetes cluster.

The routing tables are also updated. On node1 there is now a rule that forwards traffic for node2's 192.168.19.192/26 subnet over tunl0:
[root@k8s-node1 net.d]# ip route
default via 10.0.2.1 dev enp0s3 proto dhcp metric 100
10.0.2.0/24 dev enp0s3 proto kernel scope link src 10.0.2.4 metric 100
172.16.10.0/24 dev docker0 proto kernel scope link src 172.16.10.1
192.168.19.192/26 via 10.0.2.5 dev tunl0 proto bird onlink
blackhole 192.168.196.128/26 proto bird
[root@k8s-node1 net.d]#

Similarly, node2's routing table contains a forwarding rule for node1's 192.168.196.128/26 subnet:
[root@k8s-node2 ~]# ip route
default via 10.0.2.1 dev enp0s3 proto dhcp metric 100
10.0.2.0/24 dev enp0s3 proto kernel scope link src 10.0.2.5 metric 100
172.16.20.0/24 dev docker0 proto kernel scope link src 172.16.20.1
blackhole 192.168.19.192/26 proto bird
192.168.196.128/26 via 10.0.2.4 dev tunl0 proto bird onlink

With that, Calico has finished wiring up the container network between the Nodes. When Pods are created later, kubelet invokes Calico through the CNI interface to set up each Pod's networking: its IP address, routes, and iptables rules.

If CALICO_IPV4POOL_IPIP="off" is set, i.e. IPIP mode is disabled, Calico does not create the tunl0 interface; the routing rules then point directly at the physical interface, and the host itself acts as the router for container traffic.
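A hypothetical illustration of what the node1 routing entry from above might look like in BGP mode (no tunl0; the peer Node is the next hop on the physical interface):
# 192.168.19.192/26 via 10.0.2.5 dev enp0s3 proto bird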

4.3 Using Network Policy to Control Access Between Pods
Calico supports setting access policies between Pods; a simple example follows.

The example uses an Nginx Pod as the service provider and gives two client Pods different network access rights: Pods carrying the Label "role=nginxclient" may access the Nginx service, while clients without that Label are denied.
Step 1:
First annotate the Namespace that needs network isolation. All Pods in this example live in the default Namespace, so set its default policy to deny all ingress:
# kubectl annotate ns default "net.beta.kubernetes.io/network-policy={\"ingress\": {\"isolation\": \"DefaultDeny\"}}"
After this, the Pods in the default Namespace can no longer reach one another by default.

Step 2: Create the Nginx Pod with the Label "app=nginx"
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  containers:
  - name: nginx
    image: nginx

Step 3: Define the network policy that allows access to Nginx
networkpolicy-allow-nginxclient.yaml:
kind: NetworkPolicy
apiVersion: extensions/v1beta1
metadata:
  name: allow-nginxclient
spec:
  podSelector:
    matchLabels:
      app: nginx
  ingress:
    - from:
      - podSelector:
          matchLabels:
            role: nginxclient
      ports:
      - protocol: TCP
        port: 80

The target Pods are those carrying the Label "app=nginx". Client Pods carrying the Label "role=nginxclient" are allowed to access them, and only on TCP port 80 of the Nginx service.

Create the NetworkPolicy resource object:
# kubectl create -f networkpolicy-allow-nginxclient.yaml

Step 4: Create two client Pods, one with the Label "role=nginxclient" and one without. Enter each Pod and try to access the Nginx service to verify the effect of the network policy.
client1.yaml
apiVersion: v1
kind: Pod
metadata:
  name: client1
  labels:
    role: nginxclient
spec:
  containers:
  - name: client1
    image: busybox
    command: [ "sleep", "3600" ]

client2.yaml
apiVersion: v1
kind: Pod
metadata:
  name: client2
spec:
  containers:
  - name: client2
    image: busybox
    command: [ "sleep", "3600" ]

Create these two Pods, then access the Nginx service from inside each of them; only the client carrying the required Label should get through.
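A hedged sketch of the verification, assuming the Nginx Pod is simply named nginx as in Step 2 (the wget flags are those of busybox):
kubectl create -f client1.yaml
kubectl create -f client2.yaml
NGINX_IP=$(kubectl get pod nginx -o jsonpath='{.status.podIP}')    # look up the service Pod's IP
kubectl exec client1 -- wget --spider -T 5 http://$NGINX_IP         # allowed: client1 has role=nginxclient
kubectl exec client2 -- wget --spider -T 5 http://$NGINX_IP         # denied: expected to time out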

This isolation is implemented by calico-policy-controller together with calico-node: calico-policy-controller continuously watches the NetworkPolicy definitions in Kubernetes, combines them with the Pods' Labels, and pushes the resulting allow/deny policies to the calico-node service on each Node.
calico-node then programs the corresponding rules on its host, completing the network isolation between Pods.

References:

https://blog.csdn.net/watermelonbig/article/details/80720378
http://cizixs.com/2017/10/19/docker-calico-network
