How to achieve HA for SeaweedFS volume servers?


I have two volume servers in the same rack, and my replication setting is 001. If one of the volume servers goes down, uploads stop working, because replication 001 requires a second server. How can I make the volume servers highly available? If I repair the crashed node, will the data sync automatically? If so, will incoming requests automatically be routed to the healthy node while the sync is in progress?

I am running the volume servers in Kubernetes, in the same rack. One of the pods keeps restarting, with no obvious error in the logs.

System setup

Master:

Volume server 1:

/usr/bin/weed volume -mserver=weedfs-master:9333 -max=500 -publicUrl=https://file-storage-exhibition.ssiid.com -ip=weedfs-volume-1 -port=8080 -rack=rack1 -dir=/data -max=0
ports:
  - containerPort: 8080
args:
  - volume
  - -mserver=weedfs-master:9333
  - -max=500
  - -publicUrl=https://file-storage-exhibition.ssiid.com
  - -ip=weedfs-volume-1
  - -port=8080
  - -rack=rack1
env:
  - name: TZ
Volume server 2:

/usr/bin/weed volume -mserver=weedfs-master:9333 -max=500 -publicUrl=https://file-storage-exhibition.ssiid.com -ip=weedfs-volume-2 -port=8080 -rack=rack1 -dir=/data -max=0
ports:
  - containerPort: 8080
args:
  - volume
  - -mserver=weedfs-master:9333
  - -max=500
  - -publicUrl=https://file-storage-exhibition.ssiid.com
  - -ip=weedfs-volume-2
  - -port=8080
  - -rack=rack1
env:
  - name: TZ
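For context on the 001 setting: the three digits of a SeaweedFS replication string mean copies in other data centers / other racks / other servers in the same rack, so 001 places exactly one extra copy on a different server in the same rack. With only two servers in rack1, there is no spare pair once one server is down. The replication can also be set cluster-wide on the master instead of per request; a sketch, reusing the weedfs-master host name from the setup above:

```shell
# Set the cluster-wide default replication on the master (optional;
# replication can also be passed on each assignment request).
# 001 = 0 copies in other data centers, 0 in other racks,
#       1 copy on a different server in the same rack.
/usr/bin/weed master -ip=weedfs-master -port=9333 -defaultReplication=001
```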
OS / version: 30GB 1.75, linux amd64

Log from the pod that keeps restarting:

I0513 11:03:04     1 file_util.go:20] Folder /data Permission: -rwxr-xr-x
I0513 11:03:04     1 volume_loading.go:104] loading index /data/5.idx to memory
I0513 11:03:04     1 disk_location.go:81] data file /data/5.idx, replicaPlacement=001 v=3 size=8 ttl=
I0513 11:03:04     1 volume_loading.go:104] loading index /data/6.idx to memory
I0513 11:03:04     1 disk_location.go:81] data file /data/6.idx, replicaPlacement=001 v=3 size=8 ttl=
I0513 11:03:04     1 volume_loading.go:104] loading index /data/7.idx to memory
I0513 11:03:04     1 disk_location.go:81] data file /data/7.idx, replicaPlacement=001 v=3 size=8 ttl=
I0513 11:03:04     1 volume_loading.go:104] loading index /data/2.idx to memory
I0513 11:03:04     1 volume_loading.go:104] loading index /data/3.idx to memory
I0513 11:03:04     1 disk_location.go:81] data file /data/3.idx, replicaPlacement=001 v=3 size=8 ttl=
I0513 11:03:04     1 volume_loading.go:104] loading index /data/4.idx to memory
I0513 11:03:04     1 disk_location.go:81] data file /data/4.idx, replicaPlacement=001 v=3 size=8 ttl=
I0513 11:03:04     1 disk_location.go:81] data file /data/2.idx, replicaPlacement=001 v=3 size=8 ttl=
I0513 11:03:04     1 disk_location.go:117] Store started on dir: /data with 6 volumes max 0
I0513 11:03:04     1 disk_location.go:120] Store started on dir: /data with 0 ec shards
I0513 11:03:04     1 volume.go:279] Start Seaweed volume server 30GB 1.75 at 0.0.0.0:8080
I0513 11:03:04     1 volume_grpc_client_to_master.go:27] Volume server start with seed master nodes: [weedfs-master:9333]
I0513 11:03:04     1 volume_grpc_client_to_master.go:71] Heartbeat to: weedfs-master:9333
I0513 11:03:04     1 disk.go:11] read disk size: dir:"/data" all:527253700608 used:6518046720 free:520735653888 percent_free:98.76377 percent_used:1.2362258 
I0513 11:03:04     1 store.go:430] disk /data max 483 unclaimedSpace:490468MB, unused:6143MB volumeSizeLimit:1024MB
I0513 11:05:24     1 volume.go:205] graceful stop cluster http server, elapsed [0]
volume server has be killed
I0513 11:05:24     1 volume.go:210] graceful stop gRPC, elapsed [0]
I0513 11:05:24     1 volume_server.go:104] Shutting down volume server...
I0513 11:05:24     1 volume_server.go:106] Shut down successfully!
I0513 11:05:24     1 volume.go:215] stop volume server, elapsed [0]
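Note that the log above ends with a graceful shutdown ("graceful stop ... Shut down successfully!") rather than a crash, which suggests the process received SIGTERM from outside; in Kubernetes that is commonly a failing liveness probe or an eviction, so the pod's probe configuration is worth checking. A hypothetical probe against the volume server's /status endpoint, with generous timings, might look like:

```yaml
# Hypothetical liveness probe; an overly tight timeout here would
# produce exactly this pattern: periodic SIGTERM, clean shutdown, restart.
livenessProbe:
  httpGet:
    path: /status
    port: 8080
  initialDelaySeconds: 30
  periodSeconds: 30
  timeoutSeconds: 10
```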
How can I make the volume servers highly available?

Add two more volume servers.
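A sketch of an additional volume server mirroring the existing two; the name weedfs-volume-3 is hypothetical. With three or more servers in the rack, the master can still find a pair of servers for a 001 write when any one of them is down:

```shell
# Hypothetical extra volume server in the same rack (rack1), so the
# master can still satisfy replication 001 when one server is down.
/usr/bin/weed volume -mserver=weedfs-master:9333 -max=500 \
  -ip=weedfs-volume-3 -port=8080 -rack=rack1 -dir=/data
```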

If I repair the crashed node, will the data sync automatically?

The write will fail. But the write request should then get a new assignment from the master and go to other volume servers for the write. No sync is needed.
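The write path described above can be sketched as follows. The master's /dir/assign endpoint is part of SeaweedFS's HTTP API; the master address and the sample response values are illustrative:

```python
import json
import urllib.request

MASTER = "http://weedfs-master:9333"  # assumed master address

def assign(replication="001"):
    """Ask the master for a new file id. The master only hands out
    volume servers that can currently satisfy the replication, so a
    down server is simply skipped on the next assignment."""
    url = f"{MASTER}/dir/assign?replication={replication}"
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)

def upload_url(assignment):
    """Build the upload URL from a /dir/assign response."""
    return f"http://{assignment['url']}/{assignment['fid']}"

# Illustrative /dir/assign response shape (values are made up):
sample = {"fid": "3,01637037d6", "url": "weedfs-volume-2:8080", "count": 1}
print(upload_url(sample))  # http://weedfs-volume-2:8080/3,01637037d6
```

The client then PUTs or POSTs the file body to that URL; retrying a failed write simply means calling assign() again.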
