Nas4Free and support for Mellanox ConnectX 10GbE
I have been implementing Nas4Free recently, and found it to be a very nice system. I might try to port its web interface to Linux, as it meets a set of requirements (regarding the graphical interface) that I have not found met on Linux, and wish I could…
However, I had to add a driver for the Mellanox ConnectX 10GbE interface, which, unfortunately, was not included.
This might seem like a simple task; however, for a person unfamiliar with FreeBSD, it was a challenge.
I followed the steps described in this build-your-own Nas4Free wiki guide, with one minor change:
I have edited the file /usr/local/nas4free/svn/build/kernel-config/NAS4FREE-amd64 (attached: NAS4FREE-amd64).
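For the record, the change amounts to enabling the OFED/Mellanox entries in that kernel-config file. A sketch of the relevant lines, based on the FreeBSD InfiniBand wiki (exact option names may differ between FreeBSD versions, so verify against your own source tree before building):

```
options         OFED       # OpenFabrics (InfiniBand) kernel support
device          mlx4ib     # Mellanox ConnectX InfiniBand transport
device          mlxen      # Mellanox ConnectX 10GbE (Ethernet) support
```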
I did not build the system beyond compiling the kernel, as I needed only the kernel itself, to implant into an existing embedded system. The procedure was as follows:
- On the compilation server, run: cat /usr/obj/nas4free/usr/src/sys/NAS4FREE-amd64/kernel | gzip -n -9 > /tmp/kernel.gz
- On the target system: note which device is mounted on /cf
- umount /cf (because it’s read-only)
- mount the device noted before to /cf (now /cf is writable)
- Copy the file /tmp/kernel.gz from the compilation server to /cf/boot/kernel/, overwriting the existing file
- umount /cf
- Reboot
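The compression step above can be reproduced in isolation. This sketch uses a scratch file in place of the real kernel (the actual input would be /usr/obj/nas4free/usr/src/sys/NAS4FREE-amd64/kernel, which only exists on the compilation server):

```shell
# Create a stand-in "kernel" file for demonstration purposes.
dd if=/dev/zero of=/tmp/kernel.demo bs=1k count=64 2>/dev/null

# -n omits the original name and timestamp from the gzip header
# (reproducible output); -9 selects maximum compression. Using -c
# makes the `cat | gzip` pipeline from the step above unnecessary.
gzip -n -9 -c /tmp/kernel.demo > /tmp/kernel.gz

# Sanity-check: the archive must decompress back to an identical file
# before it is allowed to overwrite the boot kernel on /cf.
gunzip -c /tmp/kernel.gz | cmp -s - /tmp/kernel.demo && echo "roundtrip OK"
```

On the real system, the resulting /tmp/kernel.gz is what gets copied into /cf/boot/kernel/ on the target, as described above.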
This is how to compile a kernel with extended support and push it into an embedded Nas4Free system. Note, however, that my Nas4Free version was 9.1.0.1.636. Newer versions might include the Mellanox drivers, making this procedure obsolete.
What about the userland tools of infiniband?
Are they all in kernel?
There are no userland tools in the kernel; userland code is, by definition, outside it. Since I did not want to use IB, but rather the Mellanox 10GbE card, I had to use their drivers, which include strong bindings to InfiniBand components. I wasn't happy with that, though, and as I commented earlier, I dropped N4F pretty quickly once I understood its constraints.
Ez
Hi,
Good article. It would have saved me some time a few months ago, when I was trying to compile OFED into N4F.
Now I'm using vanilla FreeBSD.
I wanted to achieve the same thing using Mellanox ConnectX EN cards but I appear to get the following error when I try to configure an IP using ifconfig:
mlx4_core0: command 0x34 failed fw status 0x3
I got the same thing when I rolled my own nas4free.
I followed the directions here to include support for ConnectX cards:
https://wiki.freebsd.org/InfiniBand
Did you use the same resource to include Mellanox support in the kernel? If so, which model of ConnectX card?
I also hope the Nas4Free devs will include the drivers!
Thank you for your help!
Regards,
Morgan
Sorry for the delayed response. I have been rather busy, and it just didn't occur to me to log in and approve comments.
About Mellanox – it was a ConnectX-2 or ConnectX-3, as far as I remember. I was very unhappy with Nas4Free's behavior – the lack of autodetection of 4K block-size disks, and the fact that any minor change to the network settings requires a reboot. That cannot serve as the storage layer for a virtualization farm, right?
So it was later migrated to Linux, which is my comfort zone, and recreated correctly. Under CentOS 6, the system functions rather well. There is a minor bug involving unclean shutdowns of the system and ZFS availability at an early boot stage (I use ZFS to host /var and /tmp of this system, in addition to the externally served NFS shares). The only thing I wasn't very happy with was that, with a 10GbE network and very fast local ZFS performance, NFS was rather slow. Instead of sequential performance of about 600-800MB/s, I got only about 200MB/s through NFS, which is a major downgrade. You would expect me to do better.
This system will not go through any major change; however, in my next systems I will have to address these two issues.
Ez