My Pioneer FreedomBox Home Server has been running since 2018, on a freedombox-testing-free_latest_a20-olinuxino-lime2-armhf.img. It is plugged into my home router, whose DHCP server is configured to only hand out IPv4 addresses. I see the green light next to the network plug at the router, and blinking yellow and green lights next to the plug at the FreedomBox. I have also checked the cable with a different device.
Three days ago the FreedomBox stopped being reachable via Pagekite and via its local IPv4 address. I used to see it (and its IPv4 address) in the client list of the router's web interface, but it is now missing there. When I log in locally via HDMI monitor and USB keyboard, "ip a" shows that the eth0 device now only has a link-local IPv6 address (starting with fe80).
Restarting first the router (to be sure) and then the FreedomBox, waiting 10 minutes after each start, did not change the situation. What can I do?
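For reference, these are the checks I can run at the local console; a minimal sketch, assuming the interface is named eth0 as on my box and that ethtool is installed:
ip link show eth0                        # link state: look for UP and LOWER_UP
sudo ethtool eth0                        # PHY status: look for "Link detected: yes"
nmcli device status                      # how NetworkManager classifies the interface
sudo journalctl -u NetworkManager -b     # DHCP attempts and errors since boot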
My plan would now be to install a fresh image on another SD card and restore my last backup there, but I'd be happy if I could avoid this effort.
The image freedombox-testing-free_latest_a20-olinuxino-lime2-armhf.img of 2021-08-20 showed the same problem.
In the meantime I have noticed that both blinkenlights at the FreedomBox's Ethernet port blinked somewhat randomly. If I understand correctly, one of them should glow continuously. So I tried a USB LAN adapter instead. With this, the connectivity is restored!
The upgrade log shows that a 5.10 kernel was installed (but no reboot happened) shortly before the problem appeared. Perhaps I'll test a freedombox-stable-free_buster_a20-olinuxino-lime2-armhf.img (containing a 4.19 kernel). I wonder whether this would show the problem or not.
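To see which kernel is actually running versus merely installed, a quick check with standard commands:
uname -r                      # kernel the box is currently running
dpkg --list 'linux-image-*'   # kernel packages present on disk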
If you have Pioneer Edition hardware, please use the images meant specifically for that hardware instead. They differ from the lime2 images by a small fix in u-boot to get Ethernet working. You can try the latest weekly images if you want bullseye.
Thank you, @sunil! I have now tried the suggested pioneer images (of 2019-07-19 and 2021-08-23) on another SD card. Unfortunately, I still get no Ethernet connection.
For now, my workaround is to use a USB LAN adapter instead.
Next I will make a dd clone of my original card as a backup, then try to roll back the system to a "storage snapshot" that still has the old kernel. I wonder whether this will help.
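For the clone, something like the following should do; /dev/sdX is just a placeholder for the card reader's device, so double-check with lsblk before writing anything:
lsblk                                                         # identify the SD card device first
sudo dd if=/dev/sdX of=freedombox-card.img bs=4M status=progress conv=fsync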
I am still on FreedomBox version 21.4.4.
Trying to restore a storage snapshot from the end of July yields:
500
This is an internal error and not something you caused or can fix. Please report the error on the bug tracker so we can fix it. Also, please attach the status log to the bug report.
I have now created an issue. The status log says:
Cannot detect ambit since default subvolume is unknown.
This can happen if the system was not set up for rollback.
The ambit can be specified manually using the --ambit option.
Tail of the status log:
Jan 15 21:40:31 freedombox /usr/bin/plinth[512]: # snapshot list
Jan 15 21:40:31 freedombox sudo[2503]: plinth : PWD=/ ; USER=root ; COMMAND=/usr/share/plinth/actions/snapshot list
Jan 15 21:40:31 freedombox sudo[2503]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=110)
Jan 15 21:40:33 freedombox sudo[2503]: pam_unix(sudo:session): session closed for user root
Jan 15 21:40:48 freedombox /usr/bin/plinth[512]: # snapshot rollback 23567
Jan 15 21:40:48 freedombox sudo[2511]: plinth : PWD=/ ; USER=root ; COMMAND=/usr/share/plinth/actions/snapshot rollback 23567
Jan 15 21:40:48 freedombox sudo[2511]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=110)
Jan 15 21:40:49 freedombox sudo[2511]: pam_unix(sudo:session): session closed for user root
Jan 15 21:40:49 freedombox /usr/bin/plinth[512]: Error executing command - ['sudo', '-n', '/usr/share/plinth/actions/snapshot', 'rollback', '23567'], , Cannot detect ambit since default subvolume is unknown.
This can happen if the system was not set up for rollback.
The ambit can be specified manually using the --ambit option.
Traceback (most recent call last):
File "/usr/share/plinth/actions/snapshot", line 299, in <module>
main()
File "/usr/share/plinth/actions/snapshot", line 295, in main
subcommand_method(arguments)
File "/usr/share/plinth/actions/snapshot", line 286, in subcommand_rollback
subprocess.run(command, check=True)
File "/usr/lib/python3.9/subprocess.py", line 528, in run
raise CalledProcessError(retcode, process.args,
subprocess.CalledProcessError: Command '['snapper', 'rollback', '--description', 'created by rollback', '23567']' returned non-zero exit status 1.
Jan 15 21:40:49 freedombox /usr/bin/plinth[512]: Internal Server Error: /plinth/sys/snapshot/23567/rollback
Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/django/core/handlers/exception.py", line 34, in inner
response = get_response(request)
File "/usr/lib/python3/dist-packages/django/core/handlers/base.py", line 115, in _get_response
response = self.process_exception_by_middleware(e, request)
File "/usr/lib/python3/dist-packages/django/core/handlers/base.py", line 113, in _get_response
response = wrapped_callback(request, *callback_args, **callback_kwargs)
File "/usr/lib/python3/dist-packages/plinth/modules/snapshot/views.py", line 206, in rollback
actions.superuser_run('snapshot', ['rollback', number])
File "/usr/lib/python3/dist-packages/plinth/actions.py", line 104, in superuser_run
return _run(action, options, input, run_in_background, True,
File "/usr/lib/python3/dist-packages/plinth/actions.py", line 200, in _run
raise ActionError(action, output, error)
plinth.errors.ActionError: ('snapshot', '', 'Cannot detect ambit since default subvolume is unknown.\nThis can happen if the system was not set up for rollback.\nThe ambit can be specified manually using the --ambit option.\nTraceback (most recent call last):\n File "/usr/share/plinth/actions/snapshot", line 299, in <module>\n main()\n File "/usr/share/plinth/actions/snapshot", line 295, in main\n subcommand_method(arguments)\n File "/usr/share/plinth/actions/snapshot", line 286, in subcommand_rollback\n subprocess.run(command, check=True)\n File "/usr/lib/python3.9/subprocess.py", line 528, in run\n raise CalledProcessError(retcode, process.args,\nsubprocess.CalledProcessError: Command \'[\'snapper\', \'rollback\', \'--description\', \'created by rollback\', \'23567\']\' returned non-zero exit status 1.\n')
Jan 15 21:41:25 freedombox /usr/bin/plinth[512]: # help get-logs
Jan 15 21:41:25 freedombox sudo[2522]: plinth : PWD=/ ; USER=root ; COMMAND=/usr/share/plinth/actions/help get-logs
Jan 15 21:41:25 freedombox sudo[2522]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=110)
The snapshot I want to restore is number 23567; "sudo btrfs subvolume list /" shows it as
ID 23933 gen 3883200 top level 271 path .snapshots/23567/snapshot
Full output of this command:
ID 271 gen 4154718 top level 5 path .snapshots
ID 14114 gen 3218784 top level 271 path .snapshots/13752/snapshot
ID 14294 gen 3230888 top level 271 path .snapshots/13932/snapshot
ID 14663 gen 3254806 top level 271 path .snapshots/14300/snapshot
ID 15952 gen 3352849 top level 271 path .snapshots/15589/snapshot
ID 17161 gen 3435474 top level 271 path .snapshots/16798/snapshot
ID 17709 gen 3476832 top level 271 path .snapshots/17345/snapshot
ID 18857 gen 3552154 top level 271 path .snapshots/18492/snapshot
ID 20226 gen 3642652 top level 271 path .snapshots/19861/snapshot
ID 21142 gen 3701733 top level 271 path .snapshots/20777/snapshot
ID 23088 gen 3827833 top level 271 path .snapshots/22723/snapshot
ID 23854 gen 3878057 top level 271 path .snapshots/23488/snapshot
ID 23904 gen 3881345 top level 271 path .snapshots/23538/snapshot
ID 23933 gen 3883200 top level 271 path .snapshots/23567/snapshot
ID 24213 gen 3901200 top level 271 path .snapshots/23847/snapshot
ID 24406 gen 3913707 top level 271 path .snapshots/24040/snapshot
ID 24432 gen 3915215 top level 271 path .snapshots/24066/snapshot
ID 24488 gen 3920093 top level 271 path .snapshots/24107/snapshot
ID 24493 gen 3920783 top level 271 path .snapshots/24110/snapshot
ID 24504 gen 3921656 top level 271 path .snapshots/24121/snapshot
ID 24505 gen 3921662 top level 271 path .snapshots/24122/snapshot
ID 24516 gen 3922729 top level 271 path .snapshots/24133/snapshot
ID 24517 gen 3922760 top level 271 path .snapshots/24134/snapshot
ID 24518 gen 3922810 top level 271 path .snapshots/24135/snapshot
ID 24519 gen 3922852 top level 271 path .snapshots/24136/snapshot
ID 24520 gen 3922858 top level 271 path .snapshots/24137/snapshot
ID 24523 gen 3923096 top level 271 path .snapshots/24140/snapshot
ID 24526 gen 3923309 top level 271 path .snapshots/24143/snapshot
ID 24528 gen 3923444 top level 271 path .snapshots/24145/snapshot
ID 24529 gen 3923455 top level 271 path .snapshots/24146/snapshot
ID 24530 gen 3923492 top level 271 path .snapshots/24147/snapshot
ID 24563 gen 3926135 top level 271 path .snapshots/24180/snapshot
ID 24576 gen 3927070 top level 271 path .snapshots/24193/snapshot
ID 24578 gen 3927096 top level 271 path .snapshots/24195/snapshot
ID 24767 gen 3940802 top level 271 path .snapshots/24383/snapshot
ID 25183 gen 3970838 top level 271 path .snapshots/24799/snapshot
ID 25426 gen 3987322 top level 271 path .snapshots/25042/snapshot
ID 25475 gen 3990762 top level 271 path .snapshots/25091/snapshot
ID 25891 gen 4020825 top level 271 path .snapshots/25507/snapshot
ID 25908 gen 4021875 top level 271 path .snapshots/25524/snapshot
ID 26456 gen 4062458 top level 271 path .snapshots/26071/snapshot
ID 26535 gen 4068045 top level 271 path .snapshots/26150/snapshot
ID 26604 gen 4072958 top level 271 path .snapshots/26219/snapshot
ID 26616 gen 4073735 top level 271 path .snapshots/26231/snapshot
ID 26962 gen 4098911 top level 271 path .snapshots/26577/snapshot
ID 26965 gen 4099069 top level 271 path .snapshots/26580/snapshot
ID 27063 gen 4106069 top level 271 path .snapshots/26678/snapshot
ID 27364 gen 4128598 top level 271 path .snapshots/26978/snapshot
ID 27412 gen 4132202 top level 271 path .snapshots/27026/snapshot
ID 27580 gen 4144739 top level 271 path .snapshots/27194/snapshot
ID 27604 gen 4146529 top level 271 path .snapshots/27218/snapshot
ID 27628 gen 4148315 top level 271 path .snapshots/27242/snapshot
ID 27652 gen 4150051 top level 271 path .snapshots/27266/snapshot
ID 27661 gen 4150682 top level 271 path .snapshots/27275/snapshot
ID 27662 gen 4150708 top level 271 path .snapshots/27276/snapshot
ID 27672 gen 4151372 top level 271 path .snapshots/27286/snapshot
ID 27674 gen 4151444 top level 271 path .snapshots/27288/snapshot
ID 27676 gen 4151478 top level 271 path .snapshots/27290/snapshot
ID 27690 gen 4152572 top level 271 path .snapshots/27304/snapshot
ID 27691 gen 4152644 top level 271 path .snapshots/27305/snapshot
ID 27692 gen 4152719 top level 271 path .snapshots/27306/snapshot
ID 27693 gen 4152831 top level 271 path .snapshots/27307/snapshot
ID 27694 gen 4152911 top level 271 path .snapshots/27308/snapshot
ID 27695 gen 4154699 top level 271 path .snapshots/27309/snapshot
ID 27696 gen 4154716 top level 271 path .snapshots/27310/snapshot
jondo@freedombox:~$ sudo ls /.snapshots/23567/snapshot
bin boot dev etc home initrd.img initrd.img.old lib media mnt opt proc root run sbin srv sys tmp usr var vmlinuz vmlinuz.old
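Following the hint in the error message, I could try the rollback manually from the shell. This is only a sketch of what the message seems to suggest, assuming the installed snapper supports the --ambit option:
sudo btrfs subvolume get-default /             # shows the default subvolume that snapper failed to detect
sudo snapper rollback --ambit classic 23567    # specify the ambit manually, as the error suggests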
Instead of restoring an old snapshot with working LAN, I have now (after taking a dd image) triggered a manual update to see where this leads. It ended with a shutdown at 2:00 the following night. After power-cycling the box to boot again, the SSH session showed me that the update had changed the kernel from 5.10.0-8-armmp-lpae to 5.10.0-10-armmp-lpae.
However, the upgrades page still said "Your FreedomBox needs an update!" (and eth0 still didn't work). So I triggered another manual update. Now the upgrades page shows the following unattended-upgrades.log:
2022-01-16 16:17:34,509 INFO Starting unattended upgrades script
2022-01-16 16:17:34,517 INFO Allowed origins are: origin=Debian,codename=bullseye,label=Debian, origin=Debian,codename=bullseye,label=Debian-Security, origin=Debian,codename=bullseye-security,label=Debian-Security, o=Debian Backports,a=bullseye-backports,l=Debian Backports
2022-01-16 16:17:34,521 INFO Initial blacklist:
2022-01-16 16:17:34,526 INFO Initial whitelist (not strict):
2022-01-16 16:18:34,669 WARNING Package tt-rss has conffile prompt and needs to be upgraded manually
2022-01-16 16:18:37,165 INFO package tt-rss not upgraded
2022-01-16 16:18:39,391 INFO Removing unused kernel packages: linux-image-4.19.0-17-armmp-lpae
2022-01-16 16:20:45,729 INFO Packages that were successfully auto-removed: linux-image-4.19.0-17-armmp-lpae
2022-01-16 16:20:45,740 INFO Packages that are kept back:
2022-01-16 16:20:50,846 INFO No packages found that can be upgraded unattended and no pending auto-removals
2022-01-16 16:20:51,269 INFO Package freedombox is kept back because a related package is kept back or due to local apt_preferences(5).
2022-01-16 16:20:51,503 INFO Package guile-2.2-libs is kept back because a related package is kept back or due to local apt_preferences(5).
2022-01-16 16:20:54,099 INFO Package sshfs is kept back because a related package is kept back or due to local apt_preferences(5).
2022-01-16 16:20:54,206 INFO Package tt-rss is blacklisted.
2022-01-16 16:20:54,342 INFO Package zile is kept back because a related package is kept back or due to local apt_preferences(5).
Should I now try to find out why the upgrade of the freedombox package is kept back? (Maybe it's because I have turned off "Enable auto-update to next stable release"?) Or should I stop trying to move forward with upgrades, and instead migrate my data and settings to the current weekly Pioneer image? I have already tested (with the image from 2022-01-07) that eth0 works there (although both port lights still only blink).
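For reference, the standard apt commands I would use to investigate the kept-back package (nothing FreedomBox-specific):
apt policy freedombox                   # available versions, origins, and pinning
sudo apt-mark showhold                  # packages explicitly put on hold
sudo apt-get --simulate dist-upgrade    # what a full upgrade would do, without doing it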
Other folks have also encountered these packages being held back while upgrading from buster to bullseye. You might want to take a look at this thread, this thread, and this issue.
I have now fixed my network problem by migrating my Radicale data to the new Pioneer image!
For this, I first recreated all users, then installed Radicale, and then restored my last Radicale data backup. This was a restore from Bullseye to Bullseye, so fortunately I saw no issues like the ones in this Bullseye restore of Buster data.
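For anyone who wants to do the data migration by hand instead of via the Backups app, a rough sketch; I am assuming Radicale's data lives in the default Debian location /var/lib/radicale, and /mnt/backup is just a placeholder for your own target:
sudo rsync -a /var/lib/radicale/ /mnt/backup/radicale/    # on the old system: copy the collections off the old card
sudo rsync -a /mnt/backup/radicale/ /var/lib/radicale/    # on the new system: restore the data
sudo chown -R radicale:radicale /var/lib/radicale         # fix ownership for Debian's radicale user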