CS 495 Physio Gaming


Project Overview

The goal of this project is to create a web-based framework that uses electrical signals from muscle groups (electromyography, or EMG) as an engaging and novel input system for a video game. The applications of such a system go beyond pure entertainment: it has the potential to expand accessibility for the physically impaired and to assist in environments or situations where traditional input methods are less practical.

The project has a few major components, each of which is explained in detail later on this page.


Live Demo

(The project page embeds an interactive demo here that displays the current filtered sensor value.)


Project Setup

To begin, you will need to download Unity Hub. The most up-to-date version is available from Unity’s website.

Next, you will need to download the Unity Editor. For our project, we used version 6000.2.7f2. If you cannot find this version in Unity Hub, it is available in Unity’s version archive.

Once you have the correct Editor version, create a new project from Unity Hub. It should use the Universal 3D template and include Windows and WebGL as build options (WebGL being the more important of the two).

To get the FlyWorld project into your Unity project, go to Assets -> Import Package -> Custom Package and select the flyworld package from our GitHub repository. This imports all of the assets and scripts for FlyWorld into your project; the import can take a while (allow up to 30 minutes).

The build process for the Unity project has a few steps. First, set the project’s build target to Web. Then click Build And Run. This generates the build files and opens the result in your default browser. Certain features of the JavaScript pipeline will only work in Microsoft Edge (or possibly any Chromium-based browser). The JavaScript pipeline files, primarily datastream.js (described below), can be found in the UnityToJavascriptExample repo (ask Dr. Crawford) and need to be added to the build folder.

The most important file generated by the build is index.html. It needs to be modified to incorporate the JavaScript files from the sensor pipeline, primarily datastream.js. There is an indexModified.html file whose contents you can copy and paste into the generated index file. If that file is lost, instructions for recreating it can be found in the UnityToJavascriptExample GitHub repo, either by reading the code or by following the demo video there. Once the index file has been modified, refresh the webpage; the game is then ready to play and to connect the sensor.

Once the game is built and running, the sensor can be connected. This process is relatively straightforward: select the Connect button adjacent to the WebGL canvas, and it should bring up a dialog that lists the Ganglion as an option (make sure the board is powered on). Once you select it, values should start populating on the left side of the screen, and the sensor connection is complete.

The machine learning code was run from a standard IDE (VS Code in our case), and a similar setup will be needed to run the machine learning pipeline. Our GitHub repo includes a requirements.txt file that lists all the dependencies and versions used in our code. Once the repo is pulled, run “pip install -r requirements.txt” to make sure the correct dependencies and versions are installed on the developer’s machine.


Features

Overview

FlyWorld has been expanded significantly from Dr. Crawford’s original product. We believe he may want the game to go in a different direction from what we have done with it, but we will list the features we have implemented here in case they can be repurposed for a different game.

Movement

Right now, there are three methods of movement. The character can walk using WASD. This offers a fixed camera movement system where the camera rotates with the player. The player’s walk speed can be modified in the right-hand menu when the player is selected in the Unity environment.

The second way the character can move is vertically, using LShift or the EMG device. This activates the jetpack, which accelerates the character upward. While moving up, WASD can still be used to move the character. Using the jetpack consumes fuel and heats up the jetpack; both values can be observed in the top right corner of the UI. Once the overheat bar is full or the fuel bar is empty, the jetpack is unusable until fuel rises above zero and the overheat level drops below maximum.

The third method of movement is the LUNGE feature, which is an alternate way for the jetpack to move. When hitting LCtrl, the jetpack flames turn blue, and pressing LShift or using the EMG device makes the player fly forward extremely fast. This effect is temporary, lasting 3 seconds, and has a 7-second cooldown before it can be used again.

Prefabs

We have added several prefabs that could be helpful in creating a level, regardless of whether the current form of the game is continued or changed into something else (a Temple Run-style runner, for example). Currently, there are four relevant prefabs used in FlyWorld. To add a prefab to a scene, drag it from the assets menu at the bottom of the editor into the scene.

The BetterCoin, BetterFuel, and CooldownCrystals are all pickups that can be touched by the player to either add to their coin balance, refill their fuel, or cool down their jetpack. The items are destroyed on collision with the player, and the CoinBag and Jetpack objects associated with the Player object are updated to reflect the new value.

There is also a gas station prefab, Fuel Pump 3D, that was intended to be used as a checkpoint for the player. Currently, it will infinitely refill the player’s fuel while they are within a certain distance of the station.

Levels

We currently have one level as a proof of concept for FlyWorld. It is called TestLevel in the Scenes folder. It is a simple platforming level that showcases the different prefabs and features we have added to the game. We recommend exploring this level to get an understanding of the mechanics currently in place in Unity.

Much of what is present in FlyWorld uses free assets from the Unity store. We are not professional model creators, nor even amateur model creators, so the only original content visually is the HUD.

Sensor Integration

The EMG data is acquired via OpenBCI's Ganglion board. The sensor information is read and filtered by the JavaScript pipeline's datastream.js file. The filtered value is then stored as text in index.html and passed to Unity by the following code:

myGameInstance.SendMessage('JS_Hook', 'UpdateText', document.getElementById('filtered-sample').innerHTML)

The important part to note is that the active Unity scene needs an object named "JS_Hook" with a script attached to it that has a function called "UpdateText". The exact names can be changed, but the object and function names need to match between Unity and the SendMessage call.

Machine Learning Overview

For this project, we used a 1D CNN to classify gestures. Our initial goal was to classify a fist squeeze vs. a hand at rest in order to control jetpack propulsion, and we were able to achieve this with a relatively high level of accuracy. Other ideas we were not able to complete were a model that classifies rock vs. paper vs. scissors and a model that controls a much more diverse set of movements.

Pipeline

The machine learning pipeline has three primary files: Main_CollectData.py, Main_Train.py, and Main_Predict.py. The data collection file collects data for each of the gestures trained on. The training file then uses the data and the 1D CNN to train an ML model for gesture classification. The prediction file loads the trained ML model and does live predictions so that model accuracy can be observed on the backend.

Filters

Two filters are implemented in an attempt to clean up the noisy EMG signals: a notch filter and a high-pass filter. The notch filter is a simple filter used to reduce the 50/60 Hz powerline “hum” that is present nearly everywhere and is produced by a wide range of electrical equipment. The high-pass filter removes frequency content below a certain cutoff; this was necessary due to limitations of the sampling rate of our OpenBCI device. Further research should be done into the effectiveness and necessity of these filters.
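As a rough illustration, the following is a minimal sketch of how these two filters could be applied with SciPy. The sampling rate, cutoff values, and function name here are assumptions and may not match the exact values used in the project's pipeline.

import numpy as np
from scipy.signal import butter, filtfilt, iirnotch

FS = 200.0          # assumed Ganglion sampling rate (Hz)
NOTCH_FREQ = 60.0   # powerline "hum" frequency (50 Hz in some regions)
HP_CUTOFF = 5.0     # assumed high-pass cutoff for removing baseline drift (Hz)

def clean_emg(raw: np.ndarray) -> np.ndarray:
    """Apply a notch filter and then a high-pass filter to one EMG channel."""
    # Notch filter centered on the powerline frequency.
    b_notch, a_notch = iirnotch(NOTCH_FREQ, Q=30.0, fs=FS)
    notched = filtfilt(b_notch, a_notch, raw)
    # 4th-order Butterworth high-pass to remove low-frequency drift.
    b_hp, a_hp = butter(4, HP_CUTOFF, btype="highpass", fs=FS)
    return filtfilt(b_hp, a_hp, notched)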

Neural Network

Our neural network uses a simple 1D convolutional design to learn patterns from sequential data. It starts with two convolutional blocks that extract increasingly detailed features, using batch normalization, pooling, and dropout to help stabilize training and reduce overfitting. The second block ends with a global pooling layer, which condenses the learned features into a smaller set. After the convolutional blocks, the model uses a small, fully connected layer to interpret these features before passing them to a final softmax layer that outputs the prediction for the two classes. The architecture is lightweight but effective for a time-series classification task such as the one in our project.
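To make the structure concrete, here is a minimal Keras sketch of an architecture like the one described above. The layer sizes, kernel sizes, and dropout rates are illustrative assumptions and may not match the exact values used in Main_Train.py.

from tensorflow.keras import layers, models

def build_cnn_model(window_size: int = 100, num_channels: int = 1,
                    num_classes: int = 2) -> models.Model:
    """Two Conv1D blocks, global pooling, a small dense head, and a softmax output."""
    model = models.Sequential([
        layers.Input(shape=(window_size, num_channels)),
        # First convolutional block: coarse feature extraction.
        layers.Conv1D(32, kernel_size=5, padding="same", activation="relu"),
        layers.BatchNormalization(),
        layers.MaxPooling1D(pool_size=2),
        layers.Dropout(0.3),
        # Second convolutional block, condensed by global pooling.
        layers.Conv1D(64, kernel_size=3, padding="same", activation="relu"),
        layers.BatchNormalization(),
        layers.GlobalAveragePooling1D(),
        layers.Dropout(0.3),
        # Small fully connected head and two-class softmax.
        layers.Dense(32, activation="relu"),
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model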


Going Forward

The Unity project has a lot of room for expansion. One area is fixing the machine learning implementation on the Unity side: there is a bug in the MainPredict.cs script that causes it to generate enormous arrays and eventually crash the WebGL canvas. The other area is improving and expanding the game itself (physics, camera controls, levels, etc.).

Machine Learning Parameters

The machine learning side of this project has many aspects and parameters that can be modified. We will go over each file to dive into what variables can be changed, what variables need to be changed, and how that will affect the running of the code and training/prediction of the model.

Main_CollectData.py

The first variable that should be changed for every data collection session is “SAVE_FILE”. If this value is not changed to something unique at the start of a new session, the old data will be overwritten.

The next four variables worth mentioning are “RECORD_DURATION”, “NUM_TRIALS”, “NUM_PEOPLE”, and “ACTIVE_CHANNELS”. “RECORD_DURATION” is how long each gesture is recorded for; we stuck with 5 seconds over the course of the project. “NUM_TRIALS” determines the number of trials recorded for each gesture; we increased this to 15 for our largest data collection, but used as few as 5 when testing preliminary models. “NUM_PEOPLE” is the number of people from whom data will be collected. This value remained 1 for most of the project, but was increased to 5 when we wanted to collect data from our whole group. After one person records all their gestures, the program pauses and waits until “ENTER” is pressed before recording the next person. “ACTIVE_CHANNELS” specifies which EMG channels are recorded. As we only used one channel for the duration of our project, this was just channel 0, but it could be expanded to use all four of the Ganglion’s EMG channels if needed.

The “GESTURES” variable is a Python dictionary that assigns a numerical value to each gesture recorded during data collection. This variable will need to be changed if gesture classification is expanded beyond just fist squeeze and rest.
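For reference, the configuration variables described above might look something like the following near the top of Main_CollectData.py. The variable names come from this section, but the specific values shown are only the defaults discussed here.

SAVE_FILE = "emg_session_01.csv"   # change this for every new collection session
RECORD_DURATION = 5                # seconds recorded per gesture trial
NUM_TRIALS = 15                    # trials per gesture per person
NUM_PEOPLE = 1                     # program pauses for ENTER between people
ACTIVE_CHANNELS = [0]              # Ganglion EMG channels to record

# Numeric label for each gesture; extend this dictionary to add more gestures.
GESTURES = {
    "rest": 0,
    "fist_squeeze": 1,
}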

Main_Train.py

The first relevant variable in the training file is the dataframe variable, “df”. It must be edited to load the .csv file that was just saved in the data collection step.

The next two variables are “window_size” and “overlap”. Window size is the number of samples used to make a single classification decision in the CNN. We found that increasing this value increased accuracy but decreased responsiveness, meaning that transitions between states were a little slow. The “overlap” is the fraction of the window size by which the window is shifted to analyze the next window (so it effectively acts as a step size). With our window size of 100 and an overlap of 0.25, samples 0-99 are analyzed first, then 25-124, 50-149, and so on. This remained at 0.25 over the course of the project.
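The sliding-window segmentation described above can be sketched as follows. This is only an illustration of the indexing, assuming a 1-D array of samples; the function name and the exact implementation in Main_Train.py may differ.

import numpy as np

def make_windows(samples: np.ndarray, window_size: int = 100,
                 overlap: float = 0.25) -> np.ndarray:
    """Slice the signal into windows, stepping by overlap * window_size samples."""
    step = int(window_size * overlap)  # 25 samples with the values above
    starts = range(0, len(samples) - window_size + 1, step)
    return np.stack([samples[s:s + window_size] for s in starts])

windows = make_windows(np.arange(1000, dtype=float))
print(windows.shape)  # (37, 100): windows start at samples 0, 25, 50, ...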

Our neural network is defined in the build_cnn_model method. The network we defined and used is fairly simple, using only a few layers and classifying into two outputs. It is adequate for a small number of outputs, but should be expanded if a more diverse gesture classification is pursued later.

Toward the end of the file, three files are saved. As with the earlier file names, these must be changed after each run through the pipeline. We typically kept the suffix the same for all of our files for simplicity.

Main_Predict.py

At the beginning of the prediction file, the three files saved at the end of training are loaded immediately. Once again, make sure these file names match what was just saved in the training portion of the pipeline.

Similar to the data collection file, “ACTIVE_CHANNELS” and “GESTURES” are defined here. As before, make sure only the EMG channels that the data was collected from (and that the model was trained on) are used. “GESTURES” will need to be changed if more gestures are added later.
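As a hypothetical sketch of the loading and live-prediction step, the snippet below assumes one of the three saved artifacts is a Keras model file and that a window of samples has already been filtered and shaped for the network. The file name and the handling of the other saved artifacts are assumptions, not the exact contents of Main_Predict.py.

import numpy as np
from tensorflow.keras.models import load_model

GESTURES = {"rest": 0, "fist_squeeze": 1}
ID_TO_GESTURE = {v: k for k, v in GESTURES.items()}

model = load_model("gesture_cnn_session_01.keras")  # hypothetical file name

# One window of 100 samples from channel 0, shaped (batch, window, channels).
window = np.zeros((1, 100, 1))
probs = model.predict(window, verbose=0)[0]
print(ID_TO_GESTURE[int(np.argmax(probs))])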


FAQs

Q: None of the input buttons work in Unity after I loaded the project package. How do I fix it?

Dr. Crawford’s input controls use Unity’s legacy input system. By default, the Unity Editor uses the new Input System on a new project. Go to Edit -> Project Settings -> Player and change “Active Input Handling” to “Input Manager (Old)”.

Q: The player object doesn’t have gravity applied to it, and none of the collisions are working. How do I fix it?

Go to Edit -> Project Settings -> Physics and change GameObject SDK to “PhysX”. Then go to Physics -> Settings -> GameObject and change Simulation Mode to “Update”.

Q: Why was machine learning kept out of the final product?

Although we came close to implementing machine learning into the final project, we ran into issues with the performance of the game in our hosted web environment. We had staggering frame drops that we could not solve before the end of our last sprint. We also had issues getting consistently accurate results from the model in the web environment, despite the model performing strongly when run in an environment separate from the Unity game.

Q: Is it necessary to save every single file created over each data collection and model training iteration of the ML pipeline?

In our case, we felt it was in our best interest to hold onto older models in case a training iteration greatly reduced the accuracy of our model. If you feel the need to declutter, feel free to delete older files that you feel will not be needed any longer.

Q: Can the existing 1D CNN be used to classify more gestures?

The existing 1D CNN is a good baseline for gesture classification, but it should be expanded if gesture classification goes beyond just a few gestures.


Documentation Video


Contributors

Noah Morgans

Cade Dees

John Byrd

Josh Hipps

Sam Daly