Who Signed My Cert

Every now and then you have to open up an SSL cert to inspect its contents or verify the information contained within. This is something I had to do fairly often in a past life, but it is now firmly in 'reference a search engine' territory.

This Red Hat article not only gives a good overview of CA and server certificates and their functions but also includes examples of inspecting x509 certificates.

source: Who signed my cert

openssl x509 -noout -text -in www.redhat.com.crt
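To answer the literal question of who signed a cert, you can also limit the output to just the issuer and subject. A minimal example, reusing the certificate file from the command above:

# print only the signer (issuer) and the subject
openssl x509 -noout -issuer -subject -in www.redhat.com.crt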

There is another article detailing how to create podman secrets. It includes a useful command to generate a self-signed cert that you can use for internal testing.

openssl req -x509 -sha256 -nodes -days 365 -newkey rsa:4096 -keyout certificate.key -out certificate.pem
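As a quick sanity check on the generated cert, you can confirm it is self-signed (issuer and subject will match) and see when it expires. A small sketch using the filename from the command above:

# issuer and subject are identical for a self-signed cert
openssl x509 -noout -issuer -subject -in certificate.pem
# print the expiration date (365 days out, per -days above)
openssl x509 -noout -enddate -in certificate.pem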

Deploying Gitea with Podman and Docker Compose

I’ve been running a self-hosted instance of GitLab for a few months. In general I’m happy with GitLab, but it’s fairly resource intensive for my usage, so I decided to try Gitea as a lightweight alternative.

This post covers setting up Docker Compose to run with rootless Podman on a local machine, so HTTPS, security, and other production settings are out of scope.

Sources

I used the sources below to install the test instance.

Running Docker Compose with Rootless Podman

Installation with Docker (rootless)

Install and configure Podman and Docker Compose

sudo dnf install -y podman podman-docker docker-compose
systemctl --user enable podman.socket
systemctl --user start podman.socket
systemctl --user status podman.socket
export DOCKER_HOST=unix:///run/user/$UID/podman/podman.sock
echo 'export DOCKER_HOST=unix:///run/user/$UID/podman/podman.sock' >> $HOME/.bash_profile
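Before moving on, it’s worth confirming that the compatibility socket is actually answering. A quick check, assuming curl is installed (the _ping endpoint should return OK):

curl -s --unix-socket /run/user/$UID/podman/podman.sock http://localhost/_ping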

Configure Gitea docker-compose.yaml

I’m going to deploy Gitea using MySQL as the backing database. All the volumes will be local volumes so we don’t have to worry about permissions. Make sure you change the example credentials for both MySQL and Gitea.

First, create a folder for your compose file and cd into that directory:

mkdir -p gitea
cd gitea
touch docker-compose.yaml

Copy the example below into the docker-compose.yaml file.

#docker-compose.yaml
version: "2"

volumes:
  gitea-data:
    driver: local
  gitea-config:
    driver: local
  mysql-data:
    driver: local

services:
  server:
    image: docker.io/gitea/gitea:1.20.1-rootless
    environment:
      - GITEA__database__DB_TYPE=mysql
      - GITEA__database__HOST=db:3306
      - GITEA__database__NAME=gitea
      - GITEA__database__USER=gitea
      - GITEA__database__PASSWD=gitea
    restart: always
    volumes:
      - gitea-data:/var/lib/gitea
      - gitea-config:/etc/gitea
    ports:
      - "3000:3000"
      - "2222:2222"
    depends_on:
      - db

  db:
    image: docker.io/mysql:8
    restart: always
    environment:
      - MYSQL_ROOT_PASSWORD=gitea
      - MYSQL_USER=gitea
      - MYSQL_PASSWORD=gitea
      - MYSQL_DATABASE=gitea
    volumes:
      - mysql-data:/var/lib/mysql

After you have created the docker-compose.yaml file and modified it to your liking, run docker-compose up to start the Gitea instance. After a few seconds the instance will be ready to configure. If you’ve kept the default ports in place, simply navigate to http://localhost:3000 to finalize the Gitea installation. You should see the initial configuration screen.
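For reference, the two common ways to start the stack; the -d flag runs it detached so it doesn’t occupy your terminal:

# run in the foreground (Ctrl+C stops the containers)
docker-compose up
# or run detached in the background
docker-compose up -d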

You can accept the defaults or modify the configuration as necessary. Once you are satisfied, click one of the two Install Gitea buttons at the bottom of the page.

Once the server has finished the installation process your browser will refresh to the login page.

Click the Need an account? Register now. link and create the first user. This user will be the admin user.

Login as this user to perform any additional server setup and create users.

Tear Down

Once you are finished with the test instance you can shut it down by running:

docker-compose down

This will stop the containers but preserve the volumes created in the docker-compose.yaml file. If you want to delete the volumes as well, you can either remove them with the podman CLI or run docker-compose down --volumes.
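A sketch of the volume cleanup options; note that compose prefixes volume names with the project directory (gitea here), so verify the exact names with podman volume ls first:

# remove containers and the named volumes in one step
docker-compose down --volumes
# or inspect and remove them individually with the podman CLI
podman volume ls
podman volume rm gitea_gitea-data gitea_gitea-config gitea_mysql-data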

I Did a Game Jam

Ok. So I did the game jam and the results are … well, not that great, but I believe the ratings are in line with what was delivered. Now that the jam is over I’d like to take some time to reflect on the choices I made during the Game Jam.

The Experience

This was a great learning experience and the provided feedback is a good starting point for improving the project going forward. Most of the feedback was about things I wanted to do but ran out of time for.

The fact that I even published something to Itch and participated in the jam is a morale boost and it feels like a huge blocker has been removed.

It was also impressive to see what other people were able to do in the same time (and in some cases much less time).

The Engine

I decided to use Bevy, a game engine written in Rust, for this entry. This was mainly because I was learning Rust for another opportunity and Bevy seems to be the strongest choice for the language.

I could have made more progress and delivered a better game with Unity, thanks to prior experience and the much better documentation and ecosystem available for that engine. But Unity is a heavy choice: you have to create an account, download Unity Hub, install a decent IDE, and write in C#, none of which I wanted to do for this Game Jam.

Godot was also an option, and one that I seriously considered, but then I would have had to learn its specific language. Rust is available, but I’m not sure one should learn Godot with Rust. This is definitely an engine that I want to learn and use, and likely the engine I would have picked if I were making a real game and could allocate the time to learn it properly.

The choice to go with Bevy definitely had some negative consequences for me, mainly because of the time constraints of the Game Jam.

First of all, Bevy is still very much an in-development engine. There are gaps in the documentation that are expected to be filled by looking at the examples. This is great if there is an example for your specific problem, but if not, there may be no documentation to fall back on. The Unofficial Bevy Cheat Book is an excellent resource that also helps fill in the documentation gaps.

Bevy itself is implemented as various plug-ins. Almost all components of the engine can be enabled, disabled, swapped out, or customized as needed. This is extremely powerful but can make it difficult to find where a change needs to be made.

To my knowledge, Bevy does not include a 3D collision or physics library. The third-party library Rapier fills this gap, but I did not have time to learn it for this Game Jam. The current build just treats all entities as points on a 2D plane.

The Concept

I wanted to build a game around a dashing mechanic. The character would be free to walk on the X and Z axes but would not be able to move along the Y axis (no jumping). The character’s main attack would be to dash forward and knock enemies back. This attack could be charged to do more damage, with stronger enemies requiring the charged attack to push them back.

The enemies themselves would do damage to the player character when they or their projectiles touched the character. The player would take damage as long as they were in contact with the enemies. The player’s knock back attack would both move the enemy away from the player (break contact) and do damage to the enemy.

Random power-ups would appear on the screen to provide shields, restore energy, and so on.

Energy would be the main resource for the game. Energy would constantly drain during the session but could be restored by collecting power-ups. Once all of the energy was drained, the game would be over. The basic concept is that you are in a hostile environment and energy is needed to keep you functioning (repair shields or armor, maintain life support, etc.).

The main loop for the game was that the player was trying to stay alive for as long as possible by avoiding enemies or knocking them back.

Avoidance would have been the main strategy due to the contact damage. Players would have to determine whether an attack was worth it or not while also managing the number of enemies they allowed to exist. Dashing into a large group of enemies would be problematic due to the contact damage from each enemy. The player would have to effectively utilize power-ups and dashing to get the best survival time.

The Compromises

This wasn’t a very complicated concept, and one that I thought I could finish in the allotted time. The main risk in this project was using Rust, my newest language, and Bevy, an engine that I had no serious experience with, instead of Unity or even Godot. However, life found a way to interfere with my plans, and unfortunately this forced me to drastically reduce the scope of what I was working on.

It may surprise you, but “Pumpkins are the only hope” was not the first title for this project. Late in the jam I ran into a problem with the original assets that I was going to use. I needed a new set of coherent assets, and I ended up picking a few from Kenney Game Assets while muttering “These pumpkins are the only hope of finishing”.

The Results

What I ended up with is what I would label a prototype or a proof-of-concept that shows that I can use Bevy to create a game. It’s going to take more work and effort to learn the engine and its ecosystem of plug-ins but it is possible. It wasn’t a full game nor what I envisioned when I started but I’m glad that I was able to submit something for this Game Jam.

What’s next

  • Take time to build a prototype that is more in line with the original vision.
  • Provide an in-browser version. Bevy supports this, but I ran into an issue where sound wouldn’t play: the game needs to be launched from a user action, and I didn’t know how to do that at the time.
  • Prototype the game in Godot.
  • Look into AppImage or Flatpak. I didn’t want to distribute an executable in a gzip archive, but I ran out of time.

Favorite Entries

Cosmic Courier Cameron

Spacemail Chimp+

I played both of these games far longer than I probably should have. But time well spent.

Linux Game Jam 2023

Linux Game Jam 2023 has just started and I’m going to do my best to publish an entry this year. I’ve entered many game jams in the past and life has always found a way to prevent me from completing an entry.

I’m going to put this out into the universe.

This year, I’m going to finish and submit a simple game.

Linux Game Jam does not have a theme, so my options are wide open. I already have a base concept that I think will provide some flexibility and, if time allows, extensibility.

Wish me luck!

DNF Automatic Ruined My Day

We have a long-running pipeline that has worked without issue for years on CentOS 7. However, after a recent upgrade to Rocky 9, we began noticing that the pipeline was starting to fail. The application being tested would start with no issue at the beginning of the test run, but as time passed it would fail to start because it could not find a valid JAVA_HOME environment variable.

This was odd for a few reasons:

  • The test nodes are created from a validated image template with dependencies installed
  • JAVA_HOME is set and confirmed valid at the start of the run
  • The application was able to start at the beginning of the run
  • Neither the test nor the pipeline modifies the Java install

Well, it turns out that the base image used to create the test image had dnf-automatic installed and enabled.

dnf-automatic itself is an alternative way to invoke dnf upgrade and is typically configured to run from cron or systemd timers. It was this systemd timer that was waking up and updating Java during the test run, which in turn caused the JAVA_HOME environment variable to become invalid.
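If you suspect the same problem, the timers are easy to spot and stop. A minimal sketch (timer names can vary slightly between versions of the dnf-automatic package):

# list any dnf-automatic timers and when they fire next
systemctl list-timers 'dnf-*'
# stop and disable the timer on nodes that must stay frozen
sudo systemctl disable --now dnf-automatic.timer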

This failure highlighted a few issues with the test pipeline.

  • dnf-automatic is problematic for short-lived test nodes. The whole point of the test image is to ensure that your tests are running in a well-known and consistent test environment. dnf-automatic invalidates this by modifying the system.

  • The JAVA_HOME environment variable was too specific. The path used to discover the Java root folder was based on dirname $(readlink -f $(which java)). On my Fedora system, readlink -f $(which java) resolves to /usr/lib/jvm/java-17-openjdk-17.0.6.0.10-1.fc38.x86_64/bin/java. If I use this path to determine JAVA_HOME, then it will be invalid if Java is updated after JAVA_HOME is set. RHEL-like distros provide a number of symlinks to allow multiple versions of Java to be installed, and depending on the scenario it may be more advisable to use one of the more generic links in /usr/lib/jvm (see the sketch after this list).

  • The application itself did not log the value of JAVA_HOME in the monitored log files which hid the fact that the value had changed over time.
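To illustrate the second point, here is a sketch of the brittle derivation next to a more stable alternative; the exact symlink names vary by distro, so list /usr/lib/jvm on your system first:

# brittle: resolves to the fully versioned install directory,
# which becomes stale the moment the java package is updated
export JAVA_HOME=$(dirname $(dirname $(readlink -f $(which java))))
# more stable on RHEL-like distros: use one of the generic symlinks
# (check what is available with: ls -l /usr/lib/jvm)
export JAVA_HOME=/usr/lib/jvm/java-17-openjdk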

In general dnf-automatic is a very useful tool and something that I would definitely install and configure to install security updates on long-running servers. It should not have been installed on the short-lived test nodes which see frequent manual updates to put them in a known-good state.

Adding a Process to a cgroup (version 2)

Add the process id to the desired cgroup’s cgroup.procs file.


# create cgroup
sudo mkdir /sys/fs/cgroup/testgroup
# enable cpu interface
echo "+cpu" | sudo tee -a /sys/fs/cgroup/cgroup.subtree_control 
# enable cpuset interface
echo "+cpuset" | sudo tee -a /sys/fs/cgroup/cgroup.subtree_control 
# add current process to cgroup
echo "$$" | sudo tee -a /sys/fs/cgroup/testgroup/cgroup.procs 

How to determine available resources in a container

Linux distributes system resources using control groups (cgroups), which kernel.org defines as

cgroup is a mechanism to organize processes hierarchically and distribute system resources along the hierarchy in a controlled and configurable manner. [1]

There are currently two versions of cgroup in use today, and for historical and compatibility reasons version 1 and version 2 can coexist as long as there is no overlap in which controllers each manages.

The main, but not only, difference between cgroup version 1 and version 2 is that version 1 has a mount for each controller while version 2 unifies all the active controllers under a single mount point. This is obviously a very simplified explanation. Please see the official kernel documentation for more details on the two versions [2][3].

The typical mount point for both versions of cgroup is /sys/fs/cgroup. This is not a hard requirement and can differ depending on the distro. For instance, Alpine Linux with OpenRC in hybrid mode will mount cgroup version 1 at /sys/fs/cgroup and version 2 at /sys/fs/cgroup/unified.
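A quick way to tell which layout you are on is to check the filesystem type at the mount point; a sketch assuming the default /sys/fs/cgroup path:

# cgroup2fs means the unified (v2) hierarchy; tmpfs indicates a v1 or hybrid layout
stat -fc %T /sys/fs/cgroup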

This table will assume that the cgroup root path is /sys/fs/cgroup and will use relative paths based on that. Adjust the relative paths based on your environment.

resource             cgroup v1                        cgroup v2
available memory     ./memory/memory.limit_in_bytes   ./memory.max
assigned cpu cores   ./cpuset/cpuset.effective_cpus   ./cpuset.cpus.effective
cpu bandwidth        ./cpu/cpu.cfs_quota_us           ./cpu.max
cpu period           ./cpu/cpu.cfs_period_us          ./cpu.max

Refer to the documentation for the format of each file.

To determine which cgroup your application belongs to, you can reference /proc/self/cgroup, which lists all the cgroups that your process belongs to. Depending on the environment, you may also need to map your cgroup to the actual mount point by inspecting /proc/self/mountinfo. This is especially true when running inside a container, where the cgroup may refer to the path on the host and may not be accurate when viewed inside the container.
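A short sketch of that lookup:

# list every cgroup the current process belongs to
cat /proc/self/cgroup
# map those paths to real mount points (important inside containers,
# where the listed path may refer to the host's hierarchy)
grep cgroup /proc/self/mountinfo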

Asking the Right Question

How many cores are on my system?

This is a seemingly simple question. We’d like to use the number of available processors to determine how many jobs we can safely run in parallel. Linux provides various ways to fetch the number of available processors including:

  • nproc
  • lscpu
  • /proc/cpuinfo

> nproc 
32

> lscpu | grep ^CPU\(s\)\: | awk '{print $2}'
32

> grep --count "processor" /proc/cpuinfo
32