These notes describe which websites are available, who is responsible for their administration, and how to propose improvements.
Websites
https://lumiera.org https://www.lumiera.org
The main Website.
- Data is in /var/www. The Website is a git repository.
- Configuration is at /etc/apache2/sites-available/lumiera.conf
- cehteh and ichthyo are responsible for the administration.
- To propose changes, clone the git repository and send a notification / merge request to the mailing list (see the example after this list).
- Some directories symlink to other directories (see below).
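A minimal sketch of that contribution workflow; the clone URL, commit message and pull-request URL here are assumptions for illustration, the actual repository location is announced by the project:

  # clone the public website repository (URL is hypothetical)
  git clone git://git.lumiera.org/LUMIERA/web
  cd web
  # edit pages, then record the change locally
  git commit -a -m "sharpen wording on the download page"
  # announce the change on the mailing list, e.g. as a pull request
  # referring to a publicly reachable clone of your own (URL hypothetical)
  git request-pull origin/master git://example.org/your-clone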
The »staging site«
Testing repository for the Website. It is configured in Apache to be served from a separate virtual host with an obvious DNS name (which must not be mentioned in published content!). Web development is done and reviewed here before it gets pushed to the main Website.
- Data is in /var/www-staging. The Website is a git repository.
- Configuration is at /etc/apache2/sites-available/staging.conf
- cehteh and ichthyo are responsible for the administration.
- To propose changes, clone the git repository and send a notification / merge request to the mailing list.
https://git.lumiera.org
The gitweb interface to all public git repositories hosted on the server.
- Configuration is at /etc/apache2/sites-available/gitweb.conf
- cehteh (and sometimes ichthyo) cares for the administration.
https://issues.lumiera.org
Trac issue tracker.
- Configuration is at /etc/apache2/sites-available/trac.conf
- ichthyo cares for the administration.
https://static.lumiera.org
Hosting big static files (media, videos, pictures).
- Data is in /var/local/www_static. The Website is a git-annex repository.
- Configuration is at /etc/apache2/sites-available/static.conf
- cehteh cares for the administration.
- We have a sample_archive with some test media / video files there. People can contribute more media; ask on the mailing list for details.
- Data will be placed there on demand. Don't forget to git annex add the files (see the example after this list).
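A short sketch of adding new media, assuming the sample_archive directory mentioned above (the file name is illustrative):

  cd /var/local/www_static
  cp ~/clip-upload.mkv sample_archive/
  # git-annex stores the content out of the way and commits a symlink
  git annex add sample_archive/clip-upload.mkv
  git commit -m 'add sample clip'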
Directories
Some notable directories.
/git /var/local/git/
Directories and symlinks to public git repositories served by the git daemon and gitweb.
/var/local/tiddly/
Git repository with the tiddlywiki used for some work notes during development.
/var/local/www_debian/
Debian package pool maintained by ichthyo.
/var/local/doxy/
API-Documentation generated by doxygen.
/var/local/gitmob/
Public writable git repositories for receiving small and easy contributions.
/var/local/rsyncd_incoming/
Upload directory where people can rsync data (see the example at the end of this list).
/var/local/www_documentation/
Clone of the lumiera source repository to serve the documentation within.
/var/local/trac/
This is the configuration root of our Trac installation for the Lumiera issue tracker. This is actually a Git repository, since our setup became gradually more elaborate over time. We even have some local tweaks to the CSS styles and the Subtickets-Plugin (Python code).
- a copy of the Apache configuration is checked in as well
- note some scripts in the scripts subdirectory
- an export of current Trac resources was dumped into static and is served directly by Apache from there
- the actual »Trac-environment« with config and SQLite-DB is in trac-env
- all source trees of all essential plug-ins are checked into PluginBuild, so that any plugin could be regenerated by python3 setup.py bdist_egg (see the examples after this list)
- a dump of the TracDB is also checked into that Git
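A hedged sketch of regenerating a plugin egg from those checked-in source trees; the placeholder stands for one of the plug-in trees in PluginBuild:

  cd /var/local/trac/PluginBuild/<plugin-source-tree>
  python3 setup.py bdist_egg                  # produces dist/*.egg
  # deploy into the Trac environment mentioned above
  cp dist/*.egg /var/local/trac/trac-env/plugins/

And a sketch of uploading data into /var/local/rsyncd_incoming/; the rsync module name 'incoming' is an assumption, ask on the mailing list for the actual details:

  rsync -av my-media-file.mkv rsync://lumiera.org/incoming/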
Setup and Configuration
- Apache is configured with the ‘worker’ MPM
- Trac runs under WSGI, configured with a pool of multithreaded daemon processes (a quick check is sketched below)
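A quick check of this setup with standard Apache tooling; the grep merely assumes the usual spelling of the mod_wsgi directive in trac.conf:

  apache2ctl -V | grep -i 'MPM'        # expect: Server MPM: worker
  grep -n 'WSGIDaemonProcess' /etc/apache2/sites-available/trac.conf
  # a typical directive would read (illustrative values, not the live tuning):
  #   WSGIDaemonProcess trac processes=2 threads=15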
Load and Calibration
As of 1/2026, we employ a simplistic load monitoring based on evaluating the running processes and some server statistics. This is implemented as a bash script, checked in as /var/local/trac/scripts/collect-server-stats and configured to run every two minutes (via SystemD timer). The resulting data trail can be found in /var/log/apache2/loadwatch.csv, with the following columns:
- Timestamp: ISO UTC min
- Sum webserver…
  - Processes
  - Threads
  - overall resident memory (KiB)
  - factor of overall memory resident
- Apache
  - workload factor
  - Processes
  - Threads
  - resident memory
- WSGI-Trac
  - workload factor
  - Processes
  - Threads
  - resident memory
- Server-statistics
  - ∅ requests per second
  - ∅ request duration in ms
  - currently active slots
  - currently idle workers
  - scoreboard of slot allocation
The parameters of the Apache MPM and the WSGI daemon are then tuned so as to make good use of the available memory while avoiding excess idle workers; a sample query of the data trail is sketched below.
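A hedged way to eyeball that data trail; the awk field number is an assumption for illustration, the authoritative layout is the column list above:

  tail -n 5 /var/log/apache2/loadwatch.csv
  # print the timestamp plus one further column, assuming comma-separated fields
  awk -F, '{ print $1, $7 }' /var/log/apache2/loadwatch.csv | tail -n 20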
Anubis
Anubis is a service that was discussed controversially in the community this year. It was created by a Canadian open-source programmer as an immediate stopgap to help protect the »small internet« from the endless storm of requests that flood in from AI companies. It is configured as a gateway proxy, which classifies incoming requests based on request properties and sends suspicious clients to a JavaScript proof-of-work challenge first. After passing Anubis, the client gets a cookie which prevents further challenges for some limited time.
Anubis was reported to be very effective at keeping aggressive scraper bots away, yet the bots have already adapted to some degree, so this remains a game of cat and mouse. Our first evaluations, however, indicate very promising numbers: Anubis keeps out 92% of PageImpressions and UniqueUsers, thereby reducing the traffic delivered by the Lumiera issue tracker by more than 80%.
Introducing such an entrance gateway may be seen as an extreme measure; the rationale is that we have reached such a level of threat that generic rate limits would have to be so severe that they would practically render those resources useless. Without these protective measures, we would no longer be able to provide Lumiera.org as a free resource to everyone.
Legal stance: I consider this usage covered by the provisions of the GDPR. Setting this cookie is justified by our legitimate interest in keeping the service operational; we evaluate IP data patterns for the sole purpose of detecting abuse of our website, and we state that fact clearly on our GDPR page.
Setup
Anubis runs as a single-instance SystemD service unit anubis@lumiera, with a transient storage backend only. It is integrated into the Website as a gateway (reverse proxy).
Anubis is configured by a set of rules in Common Expression Language. The active rule set is configured in /etc/anubis and also checked into the Trac-Git. Rules are evaluated according to the committed-choice pattern, i.e. the first rule whose guard condition matches terminates the evaluation and yields a decision. Anubis was installed from a custom DEB package and ships with a set of rule building blocks that can be imported from the (data) prefix, which translates into the path /usr/share/doc/anubis/data (a quick inspection is sketched below).
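A hedged way to inspect the running gateway, using the unit name and paths from the notes above:

  systemctl status anubis@lumiera        # the single-instance service unit
  ls /etc/anubis                         # active rule set (also in the Trac-Git)
  ls /usr/share/doc/anubis/data          # importable rule building blocks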
As of 1/2026, Anubis protects our Issue tracker, the Git repository browser and the API-doc — which together constitute the majority of pages accessible on the website.