The newest addition to my home lab is a Synology RS3413xs+ NAS. While installing it, I came across a couple of details that I didn't know before buying it. So for other people thinking of buying this unit, here's what I found out:
- If you add network interfaces in the available PCIe slot, they might be numbered _before_ the four onboard interfaces. They were in my case. So onboard 1-4 are eth2-5, and add-on interfaces 1-2 are eth0-1.
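To see which physical NIC ended up with which name, you can match kernel interface names against the MAC addresses printed on the cards; a minimal sketch (the interface names on your box will of course differ):

```shell
# List every network interface the kernel knows about, together with
# its MAC address, so names can be matched to physical NICs.
for dev in /sys/class/net/*; do
    printf '%s %s\n' "$(basename "$dev")" "$(cat "$dev/address")"
done
```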
- the SSD cache feature only works with identical drives in both cache slots. You can buy two 120GB SSDs, but you can't just add one 240GB SSD, unless you configure it manually through the CLI and are willing to do without Synology support.
- as explained in an earlier post, there's no multiple-VLAN-over-one-interface support in the GUI, but you can work around that in the CLI
- the DSM web interface counts VLAN-tagged packets twice in its "Total Network" graph. The per-interface and per-bond counters are correct, however. PS that looks like the bug I fixed three years ago in dstat 0.7.0!
- a Synology RAID group is used as an LVM volume group. Volumes and block-based iSCSI LUNs you create afterwards are implemented as LVM logical volumes. File-based iSCSI LUNs are just placed on formatted volumes like other files.
- the SSD cache can only be used for one LVM logical volume! Read on for a manual workaround.
- activating or deactivating the SSD cache for a volume means stopping all services temporarily.
- both SSDs are configured as a software RAID0 volume, with 64KB segments.
- the SSD partitions aren't aligned at all. Makes sense I guess. The regular data partition on the disks is aligned at a 512MB boundary. PS the swap partition is aligned at a 128MB boundary, and the DSM root partition at 128KB.
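You can verify alignment yourself from the start sectors that `fdisk -u -l` reports: multiply the start sector by the sector size and check the byte offset against the boundary. A quick sketch with a hypothetical start sector (substitute the value from your own fdisk output):

```shell
# Check a partition's alignment from its fdisk start sector.
start_sector=1048576              # hypothetical value from fdisk output
sector_size=512                   # bytes per sector
boundary=$((512 * 1024 * 1024))   # the 512MB boundary we expect
offset=$((start_sector * sector_size))
if [ $((offset % boundary)) -eq 0 ]; then
    echo "aligned at $((offset / 1024 / 1024))MB"
else
    echo "not aligned"
fi
```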
- Synology implements its SSD cache feature using the Linux "flashcache" driver (the one Facebook developed). Flashcache has three caching modes (writeback, writethrough, writearound), of which Synology currently uses writearound in DSM 4.1. Just like writethrough, this only accelerates read performance, as is clearly indicated in Synology's documentation. If you insist on having a write cache as well - with all the consequences that brings! - you could manually change this mode to writeback. Not supported, of course. See the flashcache documentation for details on the three modes.
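For the curious: switching modes amounts to tearing down the current cache mapping and recreating it with the stock flashcache tools. The sketch below only prints the commands instead of executing them; the device names /dev/md2 and /dev/md3 come from this unit's layout and may differ on yours. Remove the echo wrapper at your own risk, and only after stopping services and flushing the cache:

```shell
# Dry run: print each command instead of executing it.
run() { echo "+ $*"; }   # change the echo to "$@" to really execute

run dmsetup remove cachedev_0                                # drop the current mapping
run flashcache_create -p back cachedev_0 /dev/md3 /dev/md2   # recreate in writeback mode
```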
- if you absolutely need SSD cache for multiple volumes, another manual tweak is possible: divide your SSDs into multiple partitions, build separate md RAID0 devices from those, and activate each as flashcache for a different volume.
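A sketch of what that tweak would look like, again only printing the commands rather than running them. All device names here (the sdk/sdl partitions, md numbers, and volume LVs) are assumptions for illustration; check /proc/mdstat and vgdisplay for your real layout:

```shell
# Dry run: print each command instead of executing it.
run() { echo "+ $*"; }   # change the echo to "$@" to really execute

# Two 64KB-chunk RAID0 cache devices, one per pair of SSD partitions...
run mdadm --create /dev/md4 --level=0 --chunk=64 --raid-devices=2 /dev/sdk4 /dev/sdl4
run mdadm --create /dev/md5 --level=0 --chunk=64 --raid-devices=2 /dev/sdk5 /dev/sdl5

# ...then one writearound flashcache per logical volume.
run flashcache_create -p around cachedev_1 /dev/md4 /dev/vg1/volume_1
run flashcache_create -p around cachedev_2 /dev/md5 /dev/vg1/volume_2
```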
Get this info from your own Synology device using:
# fdisk -u -l /dev/sdk; fdisk -u -l /dev/sdl
(sdk and sdl are the two SSDs in a 10-bay Synology, where sda..sdj are the 10 regular disks)
# cat /proc/mdstat
# dmsetup table cachedev_0
# dmsetup status cachedev_0
# vgdisplay -v vg1
Comments
Can you provide some details on how you use flashcache for multiple volumes?
I've managed to create the cache device but I'm unable to mount the volume.
Thanks!
I don't use multi-volume flashcache myself; I was just pointing out the theoretical possibility. Just like with a multi-VLAN setup (which I am doing), there are several hurdles to clear. Some of them can be scripted in a shell script using the internal Synology commands, but many of those don't seem to be fully documented. Have you looked at the commands in /usr/syno/sbin?
Thanks for your reply.
I've looked at the Synology commands, but they are limited in functionality.
I've managed to configure the cache using the "standard" flashcache commands, after creating different md devices manually.
I'm going to script it but I still have to find a good time to run my script in the boot process.
Thanks for sharing your experience; this is probably the best write-up I have seen on the topic. I am hoping to get your advice. Like yourself, I purchased a Synology for my home lab, though in my case the DS3612xs. While it's not officially supported for SSD caching, I don't see any technical constraint that would prevent me from manually creating the flashcache device. I currently have one single large volume on which I share files via CIFS, as well as file-based iSCSI to run a number of VMs. I had planned on creating a second volume from two 500GB SSDs, but it seems silly to dedicate those devices to VMs, where I'd have to take space constraints etc. into account, when I could use those same devices as flash cache. My question: if I were to create the flashcache device, would the VMs running on the iSCSI LUN benefit from the SSD caching (I assume individual blocks of the iSCSI file would be cached?), as well as the files I am serving via CIFS?
Ultimately the VMs are my primary concern; I would like caching for CIFS users, but that would be the icing on the cake.
Appreciate the input,
Adam
All IO going to that logical volume would benefit from the flashcache. The IO is going to the flashcache-enabled device, after all. If both iSCSI and CIFS activity are directed at that same volume (meaning the iSCSI LUNs are file-based, and on the same volume as the CIFS shares), both benefit.
One downside in the DSM 4.3 upgrade for me: my ethernet interfaces got renumbered, so my Synology showed up with different IP addresses on different VLANs, making it unreachable. I had to change the switchport VLAN assignments to fix that.
# dmsetup table cachedev_0 | head -2
0 23403556352 flashcache-syno conf:
ssd dev (/dev/md3), disk dev (/dev/md2) cache mode(WRITE_BACK)