External SSDs aren't working (all registered as a single disk)

Hello everyone!

Installation
I am using a Raspberry Pi 4B Rev 1.5 with 2GB of RAM, running the stable version of FreedomBox (installed on a 64GB SanDisk Ultra microSD card via Raspberry Pi Imager). It works fine, sitting behind a router with the DMZ set.
I wanted to use it as a storage device, so I bought four Fanxiang S101Q 1TB SSDs (they are all identical, bought as a pack of four on Amazon). They are internal SSDs (I designed a box to put them in; internal SSDs are generally smaller and all have the exact same size, so they are easier to replace). With them, I'm using SATA to USB converters.
I just want to note that the SSDs each need 1A of power, and I'm using a 27W power supply (5.1V, 5A).

Problem Description
The problem is that they all have the exact same ID name (Fanxiang S101Q 1TB (000000000069)).
When only one is connected to the Pi, I can manage it, e.g. modify its partitions.
But when two or more are plugged in, they are “registered” as a single disk, which makes any control or modification impossible. I wanted to use them in RAID 5, so I tried deleting all partitions from the disks one by one, then connecting all of them, but the OS needs to create partitions on the disks and is unable to do so, and I don't know why (maybe it can't tell the disks apart).

I tried to modify some of the disks' identification data, but they already have different serial numbers, so I don't know what to do.
If you have any idea…

Thanks in advance

Your screenshot shows four devices, /dev/sda to /dev/sdd, so they are all clearly visible. Did you try something like what is described at Chapter 3. Managing RAID devices using Cockpit | Red Hat Product Documentation? I don't know Cockpit, so I cannot help much more with it.

Alternatively, you can connect with SSH and do the configuration on the command line. One good example of that is described at Battle testing ZFS, Btrfs and mdadm+dm-integrity. This does a bit more; I suspect Cockpit would just use mdadm directly. That post basically says that the performance of mdadm is lower than ZFS or Btrfs, but I still prefer mdadm for several reasons (note: don't use the RAID 5 feature of Btrfs, it has serious issues).


I tried to follow chapter 3, but the problem is that the 4 disks are shown on the previous screen but don't appear anywhere else.
I also tried the commands found in the link you sent, but most of them don't exist on FreedomBox.



You need to install the mdadm, cryptsetup and cryptsetup-bin packages:

sudo apt install mdadm cryptsetup cryptsetup-bin

After this, things should work from the command line. There is a chance that it also makes things work with Cockpit, but I don't know (restart the FreedomBox after installing the packages if you want to try with Cockpit).

On the command line, first check your device names with lsblk (usually, the size allows you to identify which is which). If your first device is /dev/sda:

sudo integritysetup format --integrity sha256 /dev/sda

This may take a really long time.

You need to do this, one by one, for each device to be used in the RAID volume.
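
For example, if lsblk confirms that the four disks are /dev/sda to /dev/sdd (with identical disks, the SERIAL column of lsblk -o NAME,SIZE,MODEL,SERIAL helps tell them apart), a small shell loop is one way to run the formats back to back. This is just a sketch that repeats the command above for each device; adjust the device names to whatever lsblk shows, and remember that each pass can take many hours:

# device names are an assumption, change them to match your lsblk output
for dev in /dev/sda /dev/sdb /dev/sdc /dev/sdd; do
    sudo integritysetup format --integrity sha256 "$dev"
done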

After that, for each device, run:

sudo integritysetup open --integrity sha256 /dev/sda sda_int

(if your next device is /dev/sdb, then sudo integritysetup open --integrity sha256 /dev/sdb sdb_int, and so on).

Then create the volume (change the names if needed):

sudo mdadm --create --verbose --assume-clean --level=5 --raid-devices=4 /dev/md/raid5 /dev/mapper/sda_int /dev/mapper/sdb_int /dev/mapper/sdc_int /dev/mapper/sdd_int

Then make a file system:

sudo mkfs.ext4 /dev/md/raid5

If needed, restart the FreedomBox so that it sees the new volume.
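
To actually put files on it, the new file system then has to be mounted somewhere. A minimal manual example (the mount point /mnt/raid5 is just a name chosen for the example, and this mount will not persist across reboots on its own):

sudo mkdir -p /mnt/raid5
sudo mount /dev/md/raid5 /mnt/raid5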

One explanation: strictly speaking, running the mdadm command directly on the devices would be sufficient. That is enough to deal with the case where one disk reports that it has problems.

However, there is another type of issue that may happen: a disk does not report any problem, but data were silently modified. In that case, without integrity protection, mdadm (which handles the RAID) would see that there are two versions of the same data block but could not know which one is correct. The integrity protection allows detecting that data are wrong even if the disk looks fine; that way, the wrong data can be corrected thanks to RAID 5.
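
If you want to check that everything is in place, a few status commands can help (the names below are the ones used earlier in this post):

cat /proc/mdstat                     # overview of md arrays and their sync state
sudo mdadm --detail /dev/md/raid5    # details of the RAID 5 array
sudo integritysetup status sda_int   # status of one of the dm-integrity mappings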


Hey!
Thanks for your answer and sorry for the big delay…

Everything is installed (step 1), but step 2 is going to take a while (the first disk took at least 10 hours, and there are 3 left).
I will post the progress here.
Have a good day!

So after a while, I continued setting up the disks, but when I try to create the RAID via the terminal or SSH, I get this:

mdadm: layout defaults to left-symmetric
mdadm: layout defaults to left-symmetric
mdadm: chunk size defaults to 512K
mdadm: size set to 941174784K
mdadm: automatically enabling write-intent bitmap on large array
mdadm: Defaulting to version 1.2 metadata
mdadm: Failed to write metadata to /dev/mapper/sda_int

When I try via Cockpit, I get 3 working disks and one that keeps recovering forever, then it shows an error: failed to create RAID.

I think the --integrity sha256 option is only for format, not for open; my mistake.

Can you reboot (in a clean manner, via the top-right menu in Plinth), run sudo integritysetup open /dev/sdX sdX_int for each disk, and then run the mdadm command again?
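
Spelled out, assuming the disks still show up as /dev/sda to /dev/sdd after the reboot, that would be:

sudo integritysetup open /dev/sda sda_int
sudo integritysetup open /dev/sdb sdb_int
sudo integritysetup open /dev/sdc sdc_int
sudo integritysetup open /dev/sdd sdd_int

followed by the same sudo mdadm --create … command as before.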

This is a long shot, but I just wanted to share it in case it makes a difference.

Out of the 4 USB ports on the Pi, 2 of them are USB 3.0. The Raspberry Pi usually has trouble with the firmware on USB 3.0 cables (some chipsets don't work).

Could you try with 2 of your disks connected to just the USB 2.0 ports to see if it makes a difference?


Hey!
It actually makes no difference: the 2.0 ports behave like the 3.0 ones, and the disk recognition and partition problem is the same.

Hey (V.2)
I had an issue when I first tried the RAID command, which led me to entirely restart the disk formatting, so testing your new command may take more time than expected…
Hoping it will work!