Cloyne's Setup: Domains and Servers
Cloyne owns the following domain names:
cloyne.org
savecloyne.com
In the past, cloyne.net was used for things accessible only inside the local network, and cloyne.org for things accessible publicly, but that led to confusion among members. Now we use cloyne.org for everything, and cloyne.net was kept only as a redirect to cloyne.org for things that used to live there. In Spring 2022, cloyne.net was allowed to expire.
To configure DNS entries for the domain names, edit the settings in the Namecheap interface. We previously ran our own DNS servers; the configuration files from before we switched to Sonic can still be found in our Docker image.
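After changing records in the Namecheap interface, you can check from any machine that the new entries have propagated. A quick sketch using standard DNS tools (the record types here are just examples):
$ dig +short cloyne.org A    # public IP address records
$ dig +short cloyne.org MX   # mail routing records
$ dig +short cloyne.org NS   # which nameservers answer for the zone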
In general, once a Docker image configuration is changed on GitHub, the Docker image should be automatically rebuilt by GitHub and pushed to Docker Hub. After that, it is only necessary to push the new image to server2 via Salt.
All servers are by themselves just Docker hosts and no real functionality runs directly on the hosts. All functionality is packaged into Docker images which we run on the servers.
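To see which containers are actually running on a given host, you can list them with Docker after logging in; this is a generic check, not specific to our Salt configuration:
$ sudo docker ps --format 'table {{.Names}}\t{{.Image}}\t{{.Status}}'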
To configure the Docker images which run on the servers, we use the orchestration tool Salt. The configuration for all servers is thus stored in the repository. It describes the state we want a server to be in (which Docker images should run, what volumes should be mounted, etc.), and by running
salt-ssh '<servername>' state.highstate
Salt configures the server to match the wanted state.
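For example, to apply the configuration to server2 you could run the following. The test=True form is Salt's standard dry run; the exact target string depends on how the host is named in the Salt roster:
$ salt-ssh 'server2*' state.highstate test=True   # preview what would change
$ salt-ssh 'server2*' state.highstate             # actually apply the state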
To be able to run the state.highstate command, add your public SSH key to the ~/.ssh/authorized_keys file of the cloyne user on the server. Then you can log in to the server without typing a password, and salt-ssh can do the same.
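A minimal sketch of that setup, assuming you already have a local SSH key pair and using server2 as the example host:
$ ssh-copy-id cloyne@server2.cloyne.org   # appends your public key to the cloyne user's ~/.ssh/authorized_keys
$ ssh cloyne@server2.cloyne.org           # should now log in without asking for a password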
Hostname: server1.cloyne.org
Internal IP: 192.168.88.11 (eth0)
Login: username cloyne + sudo su for root
Running Ubuntu LTS distribution as a host for Docker images. Services:
- Secondary DNS server (using cloyne/powerdns-secondary Docker image)
Partitions:
- root: /dev/disk/by-uuid/5d604660-e02f-41e8-8f39-877a38f32f67
Hostname: server2.cloyne.org
Internal IP: 192.168.88.12 (eth1)
Login: username cloyne + sudo su for root
Running Ubuntu LTS distribution as a host for Docker images. Services:
- Primary DNS server (using cloyne/powerdns-master Docker image)
- Mail server (Postfix) (using cloyne/postfix Docker image)
- MySQL (using tozd/mysql Docker image)
- PostgreSQL (using tozd/postgresql Docker image)
- Nginx reverse proxy (using cloyne/web Docker image)
- phpMyAdmin (using tozd/phpmyadmin Docker image)
- phpPgAdmin (using tozd/phppgadmin Docker image)
- Cloyne.org blog (WordPress) (using cloyne/blog Docker image)
- local iperf server (using tozd/iperf Docker image)
Partitions:
- root: /dev/sdg1
- /srv: /dev/md1
- /srv/mnt: /dev/md0 (used for daily local backup of files and databases, using tozd/rdiff-backup Docker image)
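To sanity-check those backups, rdiff-backup can list the increments it has stored. This is only a sketch: the repository path /srv/mnt/backups is a guess (check the tozd/rdiff-backup container's configuration for the real one), and the commands can be run on the host if rdiff-backup is installed there, or inside the backup container:
$ rdiff-backup --list-increments /srv/mnt/backups       # show the available backup points
$ rdiff-backup --list-increment-sizes /srv/mnt/backups  # show how much space each increment uses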
$ cat /proc/mdstat
md1 : active raid1 sdb[1] sda[0]
488255488 blocks super 1.2 [2/2] [UU]
bitmap: 3/4 pages [12KB], 65536KB chunk
md0 : active raid1 sdc[0] sdd[1]
488255488 blocks super 1.2 [2/2] [UU]
bitmap: 4/4 pages [16KB], 65536KB chunk
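For more detail on any of these arrays than /proc/mdstat shows (sync state, failed members, and so on), mdadm can be queried directly on the host:
$ sudo mdadm --detail /dev/md0    # full status of one array
$ sudo mdadm --detail --scan      # one-line summary of every array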
Hostname: server3.cloyne.org
Internal IP: 192.168.88.13 (p1p1)
Login: username cloyne + sudo su for root
Running Ubuntu LTS distribution as a host for Docker images. It contains 8 x 3 TB hard drives and 6 x 750 GB drives, configured in pairs into RAID-1 arrays and combined into a 13 TB LVM volume. Services:
- Rocket.Chat (using cloyne/rocketchat Docker image)
- Minecraft server (using cloyne/minecraft Docker image)
- ownCloud (using cloyne/owncloud Docker image)
- local iperf server (using tozd/iperf Docker image)
- nodewatcher (TODO)
One hard drive bay (bay 8) is currently empty because of a failed hard drive. Its mirror (/dev/sdg1, bay 5, 750 GB) can be used as a replacement for some other drive when needed.
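If a drive does need to be swapped, the usual mdadm replacement sequence is sketched below. The array and device names are placeholders; take the real ones from /proc/mdstat, and note that on server3 the disks sit behind the 3ware controller (see the tw_cli output below), so a replacement drive may first need to be exported as a unit there and partitioned to match its mirror:
$ sudo mdadm /dev/mdX --fail /dev/sdY1     # mark the dying member as failed (if it is not already)
$ sudo mdadm /dev/mdX --remove /dev/sdY1   # remove it from the array
$ sudo mdadm /dev/mdX --add /dev/sdZ1      # add the replacement partition; the mirror then resyncs
$ cat /proc/mdstat                         # watch the rebuild progress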
Partitions:
- root: /dev/sda1
- /srv: /dev/mapper/vg0-srv
$ cat /proc/mdstat
md5 : active raid1 sdk1[3] sdj1[2]
732277568 blocks super 1.2 [2/2] [UU]
md7 : active raid1 sdp1[1] sdo1[0]
732277568 blocks super 1.2 [2/2] [UU]
md6 : active raid1 sdn1[1] sdm1[0]
732277568 blocks super 1.2 [2/2] [UU]
md2 : active raid1 sdh1[1] sdf1[2]
2929542976 blocks super 1.2 [2/2] [UU]
md3 : active raid1 sdl1[1] sdi1[0]
2929542976 blocks super 1.2 [2/2] [UU]
md0 : active raid1 sdb1[3] sdc1[2]
2929542976 blocks super 1.2 [2/2] [UU]
md1 : active raid1 sde1[2] sdd1[3]
2929542976 blocks super 1.2 [2/2] [UU]
$ lvdisplay --maps
--- Logical volume ---
LV Path /dev/vg0/srv
LV Name srv
VG Name vg0
LV UUID UvYIg3-QMId-m19Y-BeQ5-DtQV-QSPK-znMAFt
LV Write Access read/write
LV Creation host, time server3, 2015-05-16 23:15:51 -0700
LV Status available
# open 1
LV Size 12.86 TiB
Current LE 3371192
Segments 7
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 252:0
--- Segments ---
Logical extent 0 to 715219:
Type linear
Physical volume /dev/md0
Physical extents 0 to 715219
Logical extent 715220 to 1430439:
Type linear
Physical volume /dev/md1
Physical extents 0 to 715219
Logical extent 1430440 to 2145659:
Type linear
Physical volume /dev/md2
Physical extents 0 to 715219
Logical extent 2145660 to 2860879:
Type linear
Physical volume /dev/md3
Physical extents 0 to 715219
Logical extent 2860880 to 3039657:
Type linear
Physical volume /dev/md7
Physical extents 0 to 178777
Logical extent 3039658 to 3218435:
Type linear
Physical volume /dev/md5
Physical extents 0 to 178777
Logical extent 3218436 to 3371191:
Type linear
Physical volume /dev/md6
Physical extents 0 to 152755
$ pvs -o+pv_used
PV VG Fmt Attr PSize PFree Used
/dev/md0 vg0 lvm2 a-- 2.73t 0 2.73t
/dev/md1 vg0 lvm2 a-- 2.73t 0 2.73t
/dev/md2 vg0 lvm2 a-- 2.73t 0 2.73t
/dev/md3 vg0 lvm2 a-- 2.73t 0 2.73t
/dev/md5 vg0 lvm2 a-- 698.35g 0 698.35g
/dev/md6 vg0 lvm2 a-- 698.35g 101.65g 596.70g
/dev/md7 vg0 lvm2 a-- 698.35g 0 698.35g
$ ~/files/tw_cli/tw_cli /c2 show
Unit UnitType Status %RCmpl %V/I/M Stripe Size(GB) Cache AVrfy
------------------------------------------------------------------------------
u0 SINGLE VERIFYING - 75% - 2793.96 Ri ON
u1 SINGLE VERIFYING - 75% - 2793.96 Ri ON
u2 SINGLE VERIFYING - 30% - 2793.96 Ri ON
u3 SINGLE VERIFYING - 0% - 2793.96 Ri ON
u4 SINGLE VERIFY-PAUSED - 0% - 2793.96 Ri ON
u5 SINGLE VERIFY-PAUSED - 0% - 698.481 Ri ON
u6 SINGLE VERIFY-PAUSED - 0% - 2793.96 Ri ON
u7 SINGLE VERIFY-PAUSED - 0% - 2793.96 Ri ON
u8 SINGLE VERIFY-PAUSED - 0% - 698.481 Ri ON
u9 SINGLE VERIFY-PAUSED - 0% - 698.481 Ri ON
u10 SINGLE VERIFY-PAUSED - 0% - 2793.96 Ri ON
u11 SINGLE VERIFY-PAUSED - 0% - 698.481 Ri ON
u12 SINGLE VERIFY-PAUSED - 0% - 698.481 Ri ON
u13 SINGLE VERIFY-PAUSED - 0% - 698.481 Ri ON
u14 SINGLE VERIFY-PAUSED - 0% - 698.481 Ri ON
VPort Status Unit Size Type Phy Encl-Slot Model
------------------------------------------------------------------------------
p0 VERIFYING u0 2.73 TB SATA 0 - WDC WD30EFRX-68EUZN0
p1 VERIFYING u1 2.73 TB SATA 1 - WDC WD30EFRX-68EUZN0
p2 VERIFYING u2 2.73 TB SATA 2 - WDC WD30EFRX-68EUZN0
p3 VERIFYING u3 2.73 TB SATA 3 - WDC WD30EFRX-68EUZN0
p4 VERIFYING u4 2.73 TB SATA 4 - WDC WD30EFRX-68EUZN0
p5 VERIFYING u5 698.63 GB SATA 5 - ST3750640NS
p6 VERIFYING u6 2.73 TB SATA 6 - WDC WD30EFRX-68EUZN0
p7 VERIFYING u7 2.73 TB SATA 7 - WDC WD30EFRX-68EUZN0
p9 VERIFYING u8 698.63 GB SATA 9 - ST3750640NS
p10 VERIFYING u9 698.63 GB SATA 10 - ST3750640NS
p11 VERIFYING u10 2.73 TB SATA 11 - WDC WD30EFRX-68EUZN0
p12 VERIFYING u11 698.63 GB SATA 12 - ST3750640NS
p13 VERIFYING u12 698.63 GB SATA 13 - ST3750640NS
p14 VERIFYING u13 698.63 GB SATA 14 - ST3750640NS
p15 VERIFYING u14 698.63 GB SATA 15 - ST3750640NS
Name OnlineState BBUReady Status Volt Temp Hours LastCapTest
---------------------------------------------------------------------------
bbu On Yes OK OK OK 0 xx-xxx-xxxx
VPort tells you which hard drive bay a disk is in. Unit tells you under which SCSI number it is available in the system. Using that, you can find the device filename under which a hard drive is available. For example, the drive in bay 11 is unit 10, so running dmesg | grep 'sd 2:0:10:0' gives you sd 2:0:10:0: [sdl] 5859352576 512-byte logical blocks: (3.00 TB/2.73 TiB), so /dev/sdl is the device filename under which the drive is available. VPort and Unit can get out of sync and out of order. You can try to reorder them and get them back in sync by moving them around in the hardware RAID BIOS, but it takes a lot of time because the interface is buggy and you have to move drives around one by one, repeating many times, until the changes stick correctly.
On the other hand, smartctl operates on VPort numbers. So for the drive in bay 11, you can access its SMART information using smartctl -a -d 3ware,11 /dev/twa0.
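Putting the two lookups together, here is a quick sketch for checking a single drive, using bay 11 / unit 10 from the example above (adjust the numbers for other bays):
$ dmesg | grep 'sd 2:0:10:0'               # SCSI unit number -> kernel device name (here /dev/sdl)
$ sudo smartctl -a -d 3ware,11 /dev/twa0   # VPort number -> SMART data via the 3ware controller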