[FB 20.12][Solved] Plinth fails to start due to new frontpage.py shortcuts and filesystem permissions

Sorry folks, this is a serious regression. Unfortunately, it didn’t surface in our automated or manual tests and slipped by. I will prepare a fix, which should become available soon and flow in automatically.


Hey Sunil,

Thanks for taking care of this. These things happen, so do not worry too much. After all, it was comparatively easy to spot and fix. I am super happy with my FB and the experience of running, using and administering it. You’re doing a great job!



The same happened to me. I am hoping for the update. My FreedomBox Pioneer edition updates automatically, so I think the problem will be fixed in a short time.

Probably all boxes have downloaded the auto-update by now, but would it be possible to remove a breaking update from the servers?

So far my FreedomBox Pioneer remains broken, as even an ‘apt update’ only brought in an updated ca-certs package several minutes ago. I will check the workaround.


Are the automated tests only based on debian testing, or also stable + proposed backports?

Thanks for your support and understanding. I just posted a fix: https://salsa.debian.org/freedombox-team/freedombox/-/merge_requests/1854. If this gets released today, unstable users will get it immediately, testing users two days after that, and stable users (via backports) a day or so later.

Since this comes from the Debian repositories, I don’t know if there is a way to revert to the old version. Even if it were allowed, apt won’t downgrade automatically without additional configuration. Further, there would need to be testing to ensure that the old version doesn’t bail out on the changes/upgrades done by the newer version. As a general practice, it is preferable to undo the changes and release a newer version than to roll back to an older one.
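For reference, if an older version is still present in the configured repositories, a manual downgrade would look roughly like the sketch below. This is illustrative only, not a recommended procedure; the version string is a placeholder, and `--allow-downgrades` may or may not be needed depending on the apt version:

```shell
# See which versions of the package apt currently knows about
apt-cache policy freedombox

# Explicitly request the older version (placeholder version string);
# apt treats an explicit older version as a downgrade
sudo apt install --allow-downgrades freedombox=OLD_VERSION
```

As noted above, even a successful downgrade gives no guarantee that the old code copes with state left behind by the newer version.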

In this case, the fix is simple. Fixing the issue and testing it is better.

Currently, the functional tests are based on unstable only. They were somewhat flaky but have undergone some major improvements. We are hoping to run them in more situations.

Irrespective of the official infrastructure, members of the community can also help by running the functional tests in various scenarios: on Debian derivatives such as Ubuntu and Raspbian, and in situations such as upgraded machines.


I see, downgrading or a snapshot roll-back would already seem like a temporary quick-fix of the problem.

What I had in mind originally was just whether Debian has a way to remove or block a broken upgrade that unfortunately got released. That might prevent further propagation of the problem, by blocking further installations by users upgrading manually, or auto-upgrades in time zones that have not downloaded the package yet. (So they can later update directly to a fixed follow-up version without any breakage.)
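On the client side, an admin who already knows an upgrade is broken can block it locally until a fix lands by putting the package on hold. A sketch, assuming the package name `freedombox`:

```shell
# Prevent apt (and unattended-upgrades, which respects holds)
# from upgrading this package
sudo apt-mark hold freedombox

# ...later, once the fixed version has been released:
sudo apt-mark unhold freedombox
sudo apt update && sudo apt upgrade
```

This does not help boxes that have already auto-upgraded, but it protects machines that have not yet pulled the broken version.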

Very good idea. Would you have a link to a guide or docs in https://wiki.debian.org/FreedomBox/Contribute?

And maybe let us know whether it could also help if users consider making some regular donations: https://freedomboxfoundation.org/donate/

Thank you. Would a directly downloaded unstable package also be installable with dpkg -i in this case (link to python package)?
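In principle a manually downloaded `.deb` can be installed with `dpkg`, as long as its dependencies are satisfiable on the system. A rough sketch (the filename is a placeholder; mixing unstable packages into a stable system can pull in newer dependencies, so this is a stopgap, not general advice):

```shell
# Install a manually downloaded package file
sudo dpkg -i freedombox_VERSION_all.deb

# dpkg does not resolve dependencies; if it reports missing ones,
# let apt pull them in and finish the configuration
sudo apt-get install -f
```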


To me, this sounds like Debian CI / autopkgtest: https://ci.debian.net/packages/p/plinth/
The tests are defined in debian/tests:
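For illustration, a `debian/tests/control` file declares the autopkgtests in stanzas like the following. This is a minimal sketch of the format, not the actual contents of the freedombox package:

```
Tests: smoke-test
Depends: @
Restrictions: allow-stderr
```

Here `Depends: @` means the test runs against the package’s own binary packages, and `debian/tests/smoke-test` would be the executable that performs the check.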

If a package in unstable (freedombox or one of its dependencies) causes the tests to fail in testing, then that package is blocked from migrating to testing (and therefore not eligible for backports).

I would like to add some more tests to cover the core functionality. It wouldn’t work for apps though, because they are optional and don’t have a dependency relation to freedombox package. (It would not have caught this issue either, because it depends on a particular filesystem state.)

Those sound like good preventive measures.

Seems like what I meant is called manual “removal of packages” that are confirmed to have a bug that breaks earlier installations.

Another idea would be to allow users to schedule the auto-updating of non-security backports: for example, “no-wait”, 1, 2, 3, or 5 days, or 1 or 2 weeks.

Or just a random value between 0 and 48 hours by default, to stagger all auto-downloads. That would make it feasible to remove a package from the servers if a breaking bug surfaced. (With a “no-wait” option to let users help with final update testing and verification, and a “disable” option to temporarily disable backports updates during times when utmost stability is desired.)

A default delay for non-security backports would probably provide the easiest way for regular users to do their own testing and to help the community with testing, while still being able to rely on automatic updates during periods of lower priority for the server. They could just run a backup of their server, e.g. in a virtual machine, with the delay disabled.

The delay mechanism would just have to make sure that all security updates, as well as special bugfix package updates, are still installed as soon as possible.
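A small piece of this staggering already exists in apt’s periodic machinery: the daily apt service sleeps a random interval before checking for updates, controlled by `APT::Periodic::RandomSleep` (in seconds, default 1800). A hypothetical override spreading the daily run over up to 12 hours could look like this (the file name is just a convention for apt configuration snippets):

```
// /etc/apt/apt.conf.d/99random-sleep (illustrative)
// Sleep a random interval of up to 12 hours before the daily apt run
APT::Periodic::RandomSleep "43200";
```

A per-package delay of days or weeks, as proposed above, would still need new tooling on top of this.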

Is there anybody here whose FreedomBox has come up after the last FreedomBox update was installed?

I still cannot reach my FreedomBox Pioneer edition.

Please check again today after 06:30 in your configured time zone.

Now my FreedomBox Pioneer edition is working again. Thank you all.


Not sure if the issue you are talking about is what I am having a problem with. We were on vacation and just got back July 17th. I turned on the TV and the FreedomBox and saw there was an update ready for install. I went through the install, then restarted, and now all I have is a “blue screen”. I can get to settings, and I tried to do a factory reset, but I still have nothing but a blue screen. Can someone help? I love my FreedomBox!

What do you mean by “blue screen”? Could you share a photo?
Also what hardware are you using?

I think automated testing beyond some basics becomes too inefficient pretty fast, and by its nature is simply never able to actually prevent the next, unforeseen bug…

I happened to stumble over an article that also suggests that’s not too far off, unfortunately.

So, the important thing seems to be to have an easy way for users interested in stability to test their own field setup and to report bugs, and for maintainers to be able to withdraw a release again if a bug did slip through, in order to spare normal users (who only update stable-backports after a user field-testing delay) any known trouble. That would also spare the production machines of the users doing the testing, without their admins needing to be especially skilled at setting up testing environments, pinning packages, etc. Installing and using with the delay disabled would already be enough to help with testing.

…some unforeseen interactions between the code for serialization and preparing a draft for editing. What test would have found this? The serialization code worked. The deserialization code worked. The code to prepare a draft worked. They even all worked together. Just not twice. Without a priori knowledge that this could fail in this particular way, would I have tested it? Not likely…