50-Server Cluster Architecture Configuration, Part 3 (NFS)
Enterprise NFS Shared Storage Service
Introduction and role in the architecture:
Preparing servers for the NFS deployment:
We will use three virtual machines for this exercise:
1. 10.0.0.31 (nfs01) as the NFS server
2. 10.0.0.41 (backup) as client 1
3. 10.0.0.8 (web01) as client 2
[root@nfs01 ~]# cat /etc/redhat-release
CentOS release 6.9 (Final)
[root@nfs01 ~]# uname -r
2.6.32-696.el6.x86_64
[root@nfs01 ~]# uname -m
x86_64
NFS package list:
[root@backup yum]# cat /etc/yum.conf
[main]
cachedir=/var/cache/yum/$basearch/$releasever
keepcache=1 ------> changed from the default 0 so that yum keeps the downloaded RPMs
debuglevel=2
logfile=/var/log/yum.log
exactarch=1
View the RPM packages that yum has downloaded and cached so far:
[root@backup yum]# tree /var/cache/yum/x86_64/6/|grep "rpm"
│ │ ├── keyutils-1.4-5.el6.x86_64.rpm
│ │ ├── libevent-1.4.13-4.el6.x86_64.rpm
│ │ ├── libgssglue-0.1-11.el6.x86_64.rpm
│ │ ├── nfs-utils-1.2.3-75.el6.x86_64.rpm
│ │ └── nfs-utils-lib-1.1.5-13.el6.x86_64.rpm
│ ├── libtirpc-0.2.1-13.el6_9.x86_64.rpm
│ └── rpcbind-0.2.0-13.el6_9.1.x86_64.rpm
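The cached RPMs above correspond to the two packages needed on every node. On a machine with working yum repositories they can be installed with (a minimal sketch):

```shell
# Install the NFS userland tools and the RPC port mapper.
# nfs-utils pulls in its dependencies (libtirpc, keyutils, etc.).
yum install -y nfs-utils rpcbind
```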
Starting the NFS-related services:
[root@nfs01 ~]# /etc/init.d/rpcbind start
Starting rpcbind: [ OK ]
[root@nfs01 ~]# netstat -tunlp | grep rpc
tcp 0 0 0.0.0.0:111 0.0.0.0:* LISTEN 2974/rpcbind
tcp 0 0 :::111 :::* LISTEN 2974/rpcbind
udp 0 0 0.0.0.0:605 0.0.0.0:* 2974/rpcbind
udp 0 0 0.0.0.0:111 0.0.0.0:* 2974/rpcbind
udp 0 0 :::605 :::* 2974/rpcbind
udp 0 0 :::111 :::* 2974/rpcbind
The main rpcbind (portmapper) port is 111.
How do we check which services have rented a "room" (registered a port) with rpcbind?
[root@nfs01 ~]# rpcinfo -p localhost
program vers proto port service
100000 4 tcp 111 portmapper
100000 3 tcp 111 portmapper
100000 2 tcp 111 portmapper
100000 4 udp 111 portmapper
100000 3 udp 111 portmapper
100000 2 udp 111 portmapper
----> So far only the portmapper's own port is registered.
Start the NFS service:
[root@nfs01 ~]# /etc/init.d/nfs start
Starting NFS services: [ OK ]
Starting NFS quotas: [ OK ]
Starting NFS mountd: [ OK ]
Starting NFS daemon: [ OK ]
Starting RPC idmapd: [ OK ]
Now let's check rpcbind again and see whether any "tenants" have moved in:
[root@nfs01 ~]# rpcinfo -p localhost
program vers proto port service
100000 4 tcp 111 portmapper
100000 3 tcp 111 portmapper
100000 2 tcp 111 portmapper
100000 4 udp 111 portmapper
100000 3 udp 111 portmapper
100000 2 udp 111 portmapper
100011 1 udp 875 rquotad
100011 2 udp 875 rquotad
100011 1 tcp 875 rquotad
100011 2 tcp 875 rquotad
100005 1 udp 40802 mountd
100005 1 tcp 46645 mountd
100005 2 udp 53778 mountd
100005 2 tcp 54941 mountd
100005 3 udp 59941 mountd
100005 3 tcp 40293 mountd
100003 2 tcp 2049 nfs
100003 3 tcp 2049 nfs
100003 4 tcp 2049 nfs
100227 2 tcp 2049 nfs_acl
100227 3 tcp 2049 nfs_acl
100003 2 udp 2049 nfs
100003 3 udp 2049 nfs
100003 4 udp 2049 nfs
100227 2 udp 2049 nfs_acl
100227 3 udp 2049 nfs_acl
100021 1 udp 44455 nlockmgr
100021 3 udp 44455 nlockmgr
100021 4 udp 44455 nlockmgr
100021 1 tcp 43293 nlockmgr
100021 3 tcp 43293 nlockmgr
100021 4 tcp 43293 nlockmgr
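The long listing above can be condensed to just the distinct service names. A small sketch, run here against a sample of the output rather than a live server:

```shell
# A few representative lines from the `rpcinfo -p localhost` output above.
rpcinfo_sample='   program vers proto   port  service
    100000    4   tcp    111  portmapper
    100005    3   tcp  40293  mountd
    100003    4   tcp   2049  nfs
    100021    4   tcp  43293  nlockmgr'

# Skip the header line, keep the 5th column (service name), de-duplicate.
echo "$rpcinfo_sample" | awk 'NR > 1 { print $5 }' | sort -u
```

Against the real server, pipe `rpcinfo -p localhost` into the same awk/sort filter.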
As mentioned earlier, each service runs under a dedicated virtual (system) user; installing the packages with yum creates the following accounts:
[root@nfs01 ~]# id nobody
uid=99(nobody) gid=99(nobody) groups=99(nobody)
[root@nfs01 ~]# id nfsnobody
uid=65534(nfsnobody) gid=65534(nfsnobody) groups=65534(nfsnobody)
[root@nfs01 ~]# chkconfig nfs on
[root@nfs01 ~]# chkconfig rpcbind on
Summary: start rpcbind first, then the nfs service, and add both to the boot sequence.
Check the boot-time start scripts for nfs and rpcbind:
[root@nfs01 ~]# ls /etc/rc.d/rc3.d/ | grep -E "nfs|rpc"
K61nfs-rdma
K69rpcsvcgssd
S13rpcbind
S14nfslock
S19rpcgssd
S30nfs
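The S-prefixed links in rc3.d start in ascending numeric order, which is why rpcbind (S13) is guaranteed to come up before nfslock (S14) and nfs (S30). A sketch that reproduces the start order from the listing above:

```shell
# The rc3.d entries shown above for nfs/rpc.
links='K61nfs-rdma
K69rpcsvcgssd
S13rpcbind
S14nfslock
S19rpcgssd
S30nfs'

# K* links are stop scripts; only S* links run at boot, in numeric order.
echo "$links" | grep '^S' | sort
```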
Configuring NFS in practice:
The default NFS configuration file is /etc/exports, and it is empty out of the box.
[root@nfs01 ~]# ls -l /etc/exports
-rw-r--r--. 1 root root 0 Jan 12 2010 /etc/exports
EXAMPLE
# sample /etc/exports file
/ master(rw) trusty(rw,no_root_squash)
/projects proj*.local.domain(rw)
/usr *.local.domain(ro) @trusted(rw)
/home/joe pc001(rw,all_squash,anonuid=150,anongid=100)
/pub *(ro,insecure,all_squash)
/srv/www -sync,rw server @trusted @external(ro)
/foo 2001:db8:9:e54::/64(rw) 192.0.2.0/24(rw)
/build buildhost[0-9].local.domain(rw)
Each line above consists of an export path, followed by one or more client specifications (a hostname, wildcard, netgroup, IP address, or network/mask), each optionally followed by a parenthesized, comma-separated option list such as rw, ro, sync, no_root_squash, or all_squash.
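As a concrete illustration, the columns of a single exports line can be pulled apart with standard text tools (a sketch using the line we are about to configure):

```shell
# Split one /etc/exports line into path, client spec, and options.
line='/data 172.16.1.0/24(rw,sync)'

path=$(echo "$line" | awk '{ print $1 }')             # first column
clients=$(echo "$line" | sed 's/^[^ ]* *//; s/(.*//') # drop path, drop "(...)"
options=$(echo "$line" | sed 's/.*(//; s/)//')        # keep text inside "()"

echo "path=$path clients=$clients options=$options"
```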
[root@nfs01 ~]# mkdir /data -p
[root@nfs01 ~]# id nfsnobody
uid=65534(nfsnobody) gid=65534(nfsnobody) groups=65534(nfsnobody)
[root@nfs01 ~]# ls -ld /data
drwxr-xr-x. 2 root root 4096 Oct 13 14:33 /data
[root@nfs01 ~]# chown -R nfsnobody.nfsnobody /data
[root@nfs01 ~]# ls -ld /data
drwxr-xr-x. 2 nfsnobody nfsnobody 4096 Oct 13 14:33 /data
[root@nfs01 ~]# vim /etc/exports
[root@nfs01 ~]# cat /etc/exports
#shared /data by oldboy for bingbing at 20171013
/data 172.16.1.0/24(rw,sync)
That completes the basic configuration. Afterwards, verify that everything is running correctly:
[root@nfs01 ~]# /etc/init.d/rpcbind status
rpcbind (pid 2974) is running...
[root@nfs01 ~]# /etc/init.d/nfs status
rpc.svcgssd is stopped
rpc.mountd (pid 3033) is running...
nfsd (pid 3049 3048 3047 3046 3045 3044 3043 3042) is running...
rpc.rquotad (pid 3028) is running...
[root@nfs01 ~]# rpcinfo -p localhost
program vers proto port service
100000 4 tcp 111 portmapper
... (remaining output omitted)
[root@nfs01 ~]# /etc/init.d/nfs reload (reload the exports without interrupting the service)
**The reload command above is equivalent to exportfs -rv**
After the reload, check whether the export has actually been published:
[root@nfs01 ~]# showmount -e 172.16.1.31
Export list for 172.16.1.31:
/data 172.16.1.0/24 ----> if this line appears, the server side is up and working correctly!
Production NFS exports configuration, client side:
Here our environment is 10.0.0.8 (web01).
The client must likewise have the nfs-utils and rpcbind packages installed first.
[root@web01 ~]# /etc/init.d/rpcbind start
[root@web01 ~]# /etc/init.d/rpcbind status
rpcbind (pid 1210) is running...
[root@web01 ~]# chkconfig rpcbind on
[root@web01 ~]# chkconfig --list rpcbind
rpcbind 0:off 1:off 2:on 3:on 4:on 5:on 6:off
Connectivity check:
[root@web01 ~]# showmount -e 172.16.1.31
Export list for 172.16.1.31:
/data 172.16.1.0/24
[root@web01 ~]# telnet 172.16.1.31 111
Trying 172.16.1.31...
Connected to 172.16.1.31.
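Besides telnet, the client can query the server's portmapper directly; this confirms that mountd and nfs themselves are reachable, not just port 111 (a sketch to run on web01, assuming the server services are up):

```shell
# List the RPC services the NFS server has registered, as seen from
# the client. Requires rpcbind and nfs to be running on 172.16.1.31.
rpcinfo -p 172.16.1.31 | grep -E 'portmapper|mountd|nfs'
```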
Mount the export:
[root@web01 ~]# mount -t nfs 172.16.1.31:/data /mnt
[root@web01 ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda3 6.6G 1.5G 4.8G 23% /
tmpfs 491M 0 491M 0% /dev/shm
/dev/sda1 190M 35M 146M 19% /boot
172.16.1.31:/data 6.6G 1.6G 4.7G 25% /mnt
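The mount above does not survive a reboot. One common approach on CentOS 6 (a sketch under that assumption, not shown in the original) is an fstab entry handled by the netfs service, which mounts network filesystems after the network is up:

```shell
# Run as root on web01: add an NFS entry to /etc/fstab and make
# sure netfs mounts it at boot time.
echo '172.16.1.31:/data /mnt nfs defaults 0 0' >> /etc/fstab
chkconfig netfs on
mount -a    # verify the fstab entry parses and mounts cleanly
```

An alternative is simply appending the mount command to /etc/rc.local, which sidesteps boot-order issues at the cost of fstab visibility.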
Test from the client:
[root@web01 mnt]# touch oldboy.txt
[root@web01 mnt]# echo "good" >> oldboy.txt
Back on the server, check:
[root@nfs01 ~]# ls /data/
oldboy.txt
[root@nfs01 ~]# cat /data/oldboy.txt
good
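Note that the file was created by root on web01, but the export uses the default root_squash behavior, so on the server the file should be owned by the anonymous user nfsnobody. This can be confirmed on nfs01:

```shell
# With root_squash (the default), client root is mapped to
# uid/gid 65534 (nfsnobody) on the server side.
ls -l /data/oldboy.txt
```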
Source: http://www.bubuko.com/infodetail-2368734.html