Super Bionic Bash is the most recent development in the games catalog at Limbitless Solutions, all of which provide engaging ways to train Limbitless' Bionic Kids in the use of their new bionic arms. Super Bionic Bash is a 4-player party game that cycles through board game rounds and minigames. These various minigames use electromyography (EMG) and an inertial measurement unit (IMU) to provide unique methods of input to the game.
Super Bionic Bash is an Epic MegaGrants recipient.
I joined the project while it was still very early in its development, so I was tasked with building out much of the game's core infrastructure. The goal was to make the project easy for other developers to contribute to: other programmers, or even entirely separate teams, should be able to pick up pre-made building blocks and construct a new minigame with them.
To further this goal, I constructed a C++ ecosystem for the minigame system that implements common minigame functionality while remaining configurable and extensible for future developers (C++) and designers (Blueprints). For example, this system automatically handles player spawning, team arrangement, splitscreen, practice mode, and integration with the rest of the game.
Minigame developers can fully configure their minigame from the properties on their game's gamemode. Other parts of the system, like the minigame player and point counter component, can be extended through inheritance or composition to fit the needs of their specific game.
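As a rough illustration of that pattern, here is a minimal sketch of what a configurable minigame game mode base could look like in Unreal C++. Every name below (AMinigameGameModeBase, the properties, the hooks) is a hypothetical stand-in, not MinigameCore's actual API:

```cpp
// Hypothetical sketch of a configurable minigame game mode base class.
#pragma once

#include "CoreMinimal.h"
#include "GameFramework/GameModeBase.h"
#include "MinigameGameModeBase.generated.h"

UCLASS(Abstract, Blueprintable)
class AMinigameGameModeBase : public AGameModeBase
{
    GENERATED_BODY()

public:
    // Designers configure a minigame by editing these properties on a
    // Blueprint subclass of the game mode; no C++ changes required.
    UPROPERTY(EditDefaultsOnly, Category = "Minigame")
    int32 MaxPlayers = 4;

    UPROPERTY(EditDefaultsOnly, Category = "Minigame")
    bool bUseSplitscreen = false;

    UPROPERTY(EditDefaultsOnly, Category = "Minigame")
    float RoundDuration = 60.0f;

    // C++ developers extend behavior by overriding virtual hooks.
    virtual void OnMinigameStarted() {}
    virtual void OnMinigameEnded() {}
};
```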
This minigame system is packaged neatly into an Unreal plugin, internally called MinigameCore. This is a good way to manage dependencies in large projects such as Bash, but it also lets other developers build their minigames in separate Unreal projects. This removes the need to clone and work with a needlessly large Unreal project, keeping setup and iteration quick and easy.
I also created extensive internal documentation to help other developers be effective when using MinigameCore. This documentation covers how to set up and use MinigameCore in external minigame projects, how to migrate minigames into the main Super Bionic Bash project, and full details on the functionality and public API of the plugin's many parts.
Much of that API is exposed through delegates; for example, designers can bind to a delegate in Blueprints to drive UI whenever game state changes.
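A sketch of how such a Blueprint-assignable delegate might be declared; the names here (FOnPointsChanged, UPointCounterComponent) are illustrative, not the plugin's actual API:

```cpp
// Hypothetical sketch of a Blueprint-assignable delegate on a
// point-counter style component.
#pragma once

#include "CoreMinimal.h"
#include "Components/ActorComponent.h"
#include "PointCounterComponent.generated.h"

DECLARE_DYNAMIC_MULTICAST_DELEGATE_TwoParams(FOnPointsChanged, int32, PlayerIndex, int32, NewPoints);

UCLASS(ClassGroup = (Minigame), meta = (BlueprintSpawnableComponent))
class UPointCounterComponent : public UActorComponent
{
    GENERATED_BODY()

public:
    // Designers bind to this in Blueprints (e.g. to update a scoreboard
    // widget) without touching any C++.
    UPROPERTY(BlueprintAssignable, Category = "Minigame")
    FOnPointsChanged OnPointsChanged;

    void AddPoints(int32 PlayerIndex, int32 Delta)
    {
        Points.FindOrAdd(PlayerIndex) += Delta;
        OnPointsChanged.Broadcast(PlayerIndex, Points[PlayerIndex]);
    }

private:
    TMap<int32, int32> Points;
};
```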
As mentioned before, the flex controllers we use at Limbitless Solutions are equipped with motion sensors, which measure the proper acceleration and angular velocity of the device. However, no prior game at the organization had used them, and there was very little documentation or existing code for working with them. My supervisors tasked me to simply "make motion controls work" and left everything else up to me.
With the problem being rather open-ended, the first thing I did was research. I looked into how motion sensors work and the physics behind them to better understand what the sensor readings meant. For example, the accelerometer measures proper acceleration, which is acceleration relative to free fall rather than relative to the ground. This means that when the device is at rest on a table, it reads 1 g of upward acceleration from the normal force of the table pushing up! This was rather surprising to me at first.
I also researched how gyroscopes work, their limitations, and how sensor fusion can combine multiple sensors to measure the environment more accurately than any one sensor alone.
I also did a deep dive into the mathematics of how computers describe rotations, covering everything from Euler angles to rotation matrices to quaternions. Ultimately, I decided that quaternions were the best option for describing the orientation of the motion sensor: they avoid problems that plague Euler angles (such as gimbal lock), are easier to work with than rotation matrices, and are already common in the Unreal ecosystem.
Quaternion math was always a topic that I thought was unapproachable and scary. I've always heard them described as something that's helpful but too esoteric for mere mortals. I soon found out that they really aren't bad, though. I learned that quaternions are essentially just 4 numbers that describe an angle and an axis. They can be composed together via right multiplication and can be used to rotate vectors using a process called conjugation.
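To demystify that a bit, here is a minimal, self-contained sketch of those two operations: composing rotations by multiplication and rotating a vector by conjugation. The real project uses Unreal's built-in FQuat, so this is purely illustrative:

```cpp
#include <cmath>

// Minimal quaternion sketch of the ideas above; the actual project
// relies on Unreal's FQuat instead.
struct Quat {
    float w, x, y, z;

    // Build from a unit-length axis and an angle in radians:
    // q = (cos(a/2), sin(a/2) * axis).
    static Quat FromAxisAngle(float ax, float ay, float az, float angle) {
        float s = std::sin(angle * 0.5f);
        return { std::cos(angle * 0.5f), ax * s, ay * s, az * s };
    }

    // Hamilton product; composes rotations (the right operand applies first).
    Quat operator*(const Quat& r) const {
        return {
            w * r.w - x * r.x - y * r.y - z * r.z,
            w * r.x + x * r.w + y * r.z - z * r.y,
            w * r.y - x * r.z + y * r.w + z * r.x,
            w * r.z + x * r.y - y * r.x + z * r.w
        };
    }

    Quat Conjugate() const { return { w, -x, -y, -z }; }
};

// Rotate a vector by conjugation: v' = q * v * q^-1, treating v as a
// pure quaternion (w = 0). For unit quaternions, q^-1 is the conjugate.
void RotateVector(const Quat& q, float v[3]) {
    Quat p = q * Quat{ 0, v[0], v[1], v[2] } * q.Conjugate();
    v[0] = p.x; v[1] = p.y; v[2] = p.z;
}
```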
My math exploration led me to derive a formula that computes the power of a quaternion, effectively scaling the rotation it describes. This turns out to be very similar to slerp (spherical linear interpolation), but it was still fun to derive and implement a similar solution on my own.
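Continuing the sketch above, one way to write that power operation is to recover the axis and angle from the quaternion and rescale the angle (again illustrative, not the project's actual code):

```cpp
#include <cmath>

// Raise a unit quaternion to a power t, scaling the rotation it describes:
// if q rotates by angle a about some axis, Pow(q, t) rotates by t * a.
Quat Pow(const Quat& q, float t) {
    // Recover the half-angle and axis from q = (cos(a/2), sin(a/2) * axis).
    float halfAngle = std::acos(std::fmax(-1.0f, std::fmin(1.0f, q.w)));
    float s = std::sin(halfAngle);
    if (s < 1e-6f) return { 1, 0, 0, 0 }; // near-identity: no axis to scale
    float ax = q.x / s, ay = q.y / s, az = q.z / s;
    return Quat::FromAxisAngle(ax, ay, az, 2.0f * halfAngle * t);
}
```

The connection to slerp is direct: for unit quaternions, slerp(q0, q1, t) can be written as q0 * Pow(Conjugate(q0) * q1, t).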
With a solid understanding of the physics and math behind rotations, I was fully equipped to implement motion controls.
The first step is to process the raw data received from the IMU, which arrives as unsigned 8-bit integers. This requires some casting and scaling to get the data into the right units: g-force for acceleration and degrees per second for angular velocity. Some sign changes are also required to convert from the IMU's right-handed coordinate system to Unreal's left-handed one.
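A rough sketch of what that conversion can look like. The byte layout, the full-scale ranges, and which axis gets flipped are all assumptions here, since they depend on the specific sensor and how it is mounted:

```cpp
#include <cstdint>

// Hypothetical decode step: packet layout and scale factors (here assuming
// +/-2 g and +/-250 deg/s full-scale ranges) are placeholders.
struct MotionSample {
    float AccelG[3];   // proper acceleration, in g-force
    float GyroDps[3];  // angular velocity, in degrees per second
};

MotionSample DecodeImuPacket(const uint8_t* bytes) {
    MotionSample out;
    for (int i = 0; i < 3; ++i) {
        // Re-center each unsigned byte around zero, then scale to real units.
        out.AccelG[i]  = (static_cast<int>(bytes[i])     - 128) * (2.0f   / 128.0f);
        out.GyroDps[i] = (static_cast<int>(bytes[3 + i]) - 128) * (250.0f / 128.0f);
    }
    // Flip one axis to convert from the IMU's right-handed coordinate
    // system to Unreal's left-handed one (which axis flips depends on
    // the sensor's mounting; Y is just an example).
    out.AccelG[1]  = -out.AccelG[1];
    out.GyroDps[1] = -out.GyroDps[1];
    return out;
}
```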
Then, the angular velocity is integrated to approximate the angular displacement traveled over the last frame, and that displacement is applied to the device's current rotation. This method of calculating orientation accumulates error quickly, however, and desyncs after just a few seconds of rotation.
This is where sensor fusion saves the day. The orientation is also estimated from the gravity vector as read by the accelerometer, which is reliable over the long term but not the short term. Blending in a small amount of this accelerometer-based correction each frame (a complementary filter) keeps the calculated orientation accurate over long periods while preventing the accelerometer from overpowering the gyroscope during short bursts of rotation.
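Here is a sketch of that fusion step, reusing Quat, Pow(), and RotateVector() from the earlier sketches. The blend factor and helper names are illustrative, not the project's actual code:

```cpp
#include <cmath>

// Shortest-arc rotation taking unit vector a to unit vector b
// (assumes a and b are not exactly opposite).
Quat FromTwoVectors(const float a[3], const float b[3]) {
    Quat q = { 1.0f + a[0]*b[0] + a[1]*b[1] + a[2]*b[2],
               a[1]*b[2] - a[2]*b[1],
               a[2]*b[0] - a[0]*b[2],
               a[0]*b[1] - a[1]*b[0] };
    float n = std::sqrt(q.w*q.w + q.x*q.x + q.y*q.y + q.z*q.z);
    return { q.w / n, q.x / n, q.y / n, q.z / n };
}

Quat Orientation = { 1, 0, 0, 0 }; // body-to-world estimate

void UpdateOrientation(const float gyroRad[3], const float accelG[3], float dt) {
    // 1. Gyro: integrate angular velocity over the frame. Accurate in
    //    short bursts, but error accumulates and the estimate drifts.
    float mag = std::sqrt(gyroRad[0]*gyroRad[0] + gyroRad[1]*gyroRad[1]
                        + gyroRad[2]*gyroRad[2]);
    if (mag > 1e-6f) {
        Quat delta = Quat::FromAxisAngle(gyroRad[0] / mag, gyroRad[1] / mag,
                                         gyroRad[2] / mag, mag * dt);
        Orientation = Orientation * delta;
    }

    // 2. Accelerometer: at rest it reads 1 g straight up, so the measured
    //    direction, rotated into world space, should match world up.
    float len = std::sqrt(accelG[0]*accelG[0] + accelG[1]*accelG[1]
                        + accelG[2]*accelG[2]);
    if (len < 0.1f) return; // near free fall: no usable gravity reading
    float up[3] = { accelG[0] / len, accelG[1] / len, accelG[2] / len };
    RotateVector(Orientation, up);

    // 3. Apply only a small fraction of the accelerometer correction each
    //    frame, so the gyro dominates short-term while the accelerometer
    //    slowly pulls the estimate back over the long term.
    const float worldUp[3] = { 0, 0, 1 };
    const float alpha = 0.02f; // small blend factor (assumed value)
    Orientation = Pow(FromTwoVectors(up, worldUp), alpha) * Orientation;
}
```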
The result of this system is a robust and (mostly, see below) accurate way of measuring the current orientation of the device in space. This alone opens up many interesting gameplay mechanics, such as using the flex controller to aim a cursor on the screen. However, the orientation of the device was just one piece of the puzzle; knowing how the device is moving through space was the next.
Due to the rate at which packets are sent from the device over Bluetooth, it is infeasible to dead-reckon the position or velocity of the device from the accelerometer data alone. However, with the previously calculated orientation, it is possible to negate the effects of gravity in the accelerometer reading and recover the acceleration of the device from the player's perspective. Essentially, this is the acceleration due only to the player's movements, not Earth's gravity. It can be used to detect motions like jumps, punches, and other sudden actions, opening up even more possibilities for minigame mechanics.
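In sketch form, building on the helpers above, the gravity compensation amounts to rotating the reading into world space and subtracting the constant 1 g that gravity contributes:

```cpp
// Gravity compensation sketch: what remains after subtracting gravity is
// the acceleration caused by the player's own movement.
void GetPlayerAcceleration(const Quat& orientation, const float accelG[3],
                           float outWorldAccelG[3]) {
    outWorldAccelG[0] = accelG[0];
    outWorldAccelG[1] = accelG[1];
    outWorldAccelG[2] = accelG[2];
    RotateVector(orientation, outWorldAccelG); // body frame -> world frame
    outWorldAccelG[2] -= 1.0f; // remove the 1 g the device reads at rest
}
```

Thresholding the magnitude of this vector is then a simple way to detect sudden actions like a punch or a jump.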
I also made an editor utility widget that displays the device's recent data over time, which was instrumental in debugging and improving the system.
There are still some limitations with this system. Since the device is not equipped with a magnetometer, it will drift about the global Z-axis over time: rotation about the Z-axis cannot be measured from the accelerometer's gravity reading the way tilt about the X- and Y-axes can. Integrating the angular velocity from the gyroscope is good enough for short-term use, but for long-term accuracy a magnetometer would be needed to find the device's bearing from Earth's magnetic field.
One solution to this problem is to have the player periodically re-zero the device by holding it in a known orientation (e.g. straight out in front of them) and treating that as the new "zero." Many games that rely on motion controls do this, often in ways that are well-hidden and don't read as a recalibration to the player.
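A minimal sketch of that re-zeroing trick, again building on the earlier Quat helpers; the function names are illustrative:

```cpp
// While the player holds the device in the agreed-upon pose, capture the
// inverse of the current estimate and apply it to all future readings so
// that pose becomes the new identity orientation.
Quat ZeroOffset = { 1, 0, 0, 0 };

void Recalibrate(const Quat& currentOrientation) {
    ZeroOffset = currentOrientation.Conjugate(); // inverse of a unit quaternion
}

Quat GetCalibratedOrientation(const Quat& currentOrientation) {
    // If the device is back in the reference pose, this returns identity.
    return ZeroOffset * currentOrientation;
}
```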