RPS Vision ML

Rock, Paper, Scissors (RPS)

This is a web app that uses machine learning to identify hand gestures from the Rock, Paper, Scissors game.

It is developed with TensorFlow and deployed so that both training and prediction run on the device.
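A minimal sketch of how such in-browser loading might look, assuming the app is built with TensorFlow.js (@tensorflow/tfjs); the model path, optimizer, and loss below are illustrative assumptions rather than the app's actual configuration:

```ts
import * as tf from '@tensorflow/tfjs';

// Hypothetical path where the pre-trained RPS model is hosted alongside the app.
const MODEL_URL = 'model/model.json';

// Load the model once on startup; all later training and inference
// happen in the browser, so no image ever leaves the device.
export async function loadModel(): Promise<tf.LayersModel> {
  const model = await tf.loadLayersModel(MODEL_URL);
  // Compile so the model can be fine-tuned on newly captured samples.
  model.compile({
    optimizer: tf.train.adam(0.0001),
    loss: 'categoricalCrossentropy',
    metrics: ['accuracy'],
  });
  return model;
}
```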

The model was pre-trained on a predefined dataset up to a certain accuracy.

It can be trained further by capturing your own gestures.

Because it runs on the device, none of the images are stored online; they all stay on your device and are used only to train the model.
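Capturing a sample could look roughly like the sketch below, again assuming TensorFlow.js; the 224x224 input size, the normalisation, and the capturedImages/capturedLabels arrays are assumptions made for illustration:

```ts
import * as tf from '@tensorflow/tfjs';

// In-memory sample store; nothing here is ever uploaded.
const capturedImages: tf.Tensor4D[] = [];
const capturedLabels: number[] = [];

// Grab one frame from the <video> element showing the webcam feed,
// normalise it, and keep it together with its class label
// (0 = rock, 1 = paper, 2 = scissors in this sketch).
export function captureSample(video: HTMLVideoElement, label: number): void {
  const image = tf.tidy(() =>
    tf.browser.fromPixels(video)
      .resizeBilinear([224, 224]) // assumed model input size
      .toFloat()
      .div(255)                   // scale pixel values to [0, 1]
      .expandDims(0) as tf.Tensor4D
  );
  capturedImages.push(image);
  capturedLabels.push(label);
}
```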

To improve the current model, capture rock, paper, and scissors samples by clicking the respective buttons, then click "Train Network" (a rough sketch of this step follows below).
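Under the assumption that the collected samples are fine-tuned in the browser with model.fit, the "Train Network" step could look like this; the epoch count, batch size, and parameter names are illustrative, not the app's actual values:

```ts
import * as tf from '@tensorflow/tfjs';

// Fine-tune the model on the samples captured above, entirely in the browser.
export async function trainNetwork(
  model: tf.LayersModel,
  capturedImages: tf.Tensor4D[],
  capturedLabels: number[],
): Promise<void> {
  const xs = tf.concat(capturedImages, 0);                      // [N, 224, 224, 3]
  const ys = tf.oneHot(tf.tensor1d(capturedLabels, 'int32'), 3) // [N, 3]
    .toFloat();

  await model.fit(xs, ys, {
    epochs: 10,
    batchSize: 8,
    shuffle: true,
    callbacks: {
      onEpochEnd: (epoch, logs) =>
        console.log(`epoch ${epoch}: loss=${logs?.loss?.toFixed(4)}`),
    },
  });

  xs.dispose();
  ys.dispose();
}
```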
Click 'Start Predicting' to see predictions, and 'Stop Predicting' to end.
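The prediction loop behind 'Start Predicting' / 'Stop Predicting' might be structured as follows; the class ordering, preprocessing, and function names are assumptions for the sketch:

```ts
import * as tf from '@tensorflow/tfjs';

const CLASSES = ['rock', 'paper', 'scissors']; // assumed output ordering
let predicting = false;

// Classify webcam frames in a loop until stopPredicting() is called.
export async function startPredicting(
  model: tf.LayersModel,
  video: HTMLVideoElement,
  onResult: (gesture: string) => void,
): Promise<void> {
  predicting = true;
  while (predicting) {
    const scores = tf.tidy(() => {
      const input = tf.browser.fromPixels(video)
        .resizeBilinear([224, 224])
        .toFloat()
        .div(255)
        .expandDims(0);
      return model.predict(input) as tf.Tensor;
    });
    const best = scores.argMax(-1);
    const classIndex = (await best.data())[0];
    scores.dispose();
    best.dispose();
    onResult(CLASSES[classIndex]);
    await tf.nextFrame(); // yield to the browser between frames
  }
}

export function stopPredicting(): void {
  predicting = false;
}
```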
Once you are happy with your model, click 'Download Model' to save it to your local disk.
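Saving can be done with TensorFlow.js's downloads:// handler, which triggers a browser download of the model topology (JSON) and weights; the file name rps-model below is a hypothetical placeholder:

```ts
import * as tf from '@tensorflow/tfjs';

// Trigger a browser download of the current model; the name is a placeholder.
export async function downloadModel(model: tf.LayersModel): Promise<void> {
  await model.save('downloads://rps-model');
}
```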