Human Chasing Camera 1.0
Last week I built a circuit that connected a servo motor to an Arduino and controlled it with a mouse. This week's project developed that idea further. In addition, I wanted to use the ml5.js machine learning library, made for p5.js, to control something physical in the world, in this case the servo motor.
It is common knowledge that machine learning algorithms like the ones I used for these projects power surveillance systems around the world today. These systems intrude on privacy to varying degrees, in ways that correspond to geopolitical forces and mechanisms of control.
To prepare effectively against these kinds of surveillance systems, it is necessary to learn how they are built.
The system uses image recognition software to track your body position, and feeds that data into the movement of a motor with the camera mounted on top, effectively giving the sense that the camera is chasing you.
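The core of the chasing behavior is a mapping from the tracked horizontal position (in camera pixels) to a servo angle. Here is a minimal sketch of that mapping, assuming a 640-pixel-wide video feed and a 0–180° servo; the function name and the mirrored range are illustrative, not taken from my actual sketch:

```javascript
// Map a nose X coordinate (0–640 px) to a servo angle (0–180°).
// The range is reversed because the webcam image is mirrored relative
// to the scene: a nose on the left of the frame turns the servo right.
function noseToAngle(noseX, videoWidth = 640) {
  // Clamp to the frame so off-screen estimates don't overdrive the servo
  const x = Math.min(Math.max(noseX, 0), videoWidth);
  return Math.round((1 - x / videoWidth) * 180);
}

console.log(noseToAngle(320)); // center of frame → 90
console.log(noseToAngle(640)); // right edge → 0
```

In the real sketch this angle is what gets written out over serial to the Arduino, which passes it straight to the servo.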
The HS-311 servo motor has enough torque to move the webcam easily, so it works perfectly for my purposes.
A more detailed description of how the connection was made between the Arduino and p5 can be found in my last blog post.
ML5 and Posenet
The ml5 library has an interesting set of functions built on TensorFlow.js, including PoseNet, a convolutional neural network that detects human poses in real time.
When we use ml5 through a p5 sketch to track someone's body position, we get a series of X and Y coordinates corresponding to the different parts of the body being tracked in real time. In this project, the camera is used to track only one point: the nose. The reason is that we need a point that always corresponds to a front-facing user. It also makes it seem as if the tracked person and the camera are in a conversation, looking face to "face".
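PoseNet reports each detection as a pose object with a `keypoints` array, so pulling out the nose is just a lookup by part name. A small sketch of that extraction, assuming the pose shape ml5's `poseNet` callback provides (the surrounding p5 sketch and video setup are omitted here):

```javascript
// Extract the nose position from ml5 PoseNet results.
// The "pose" callback receives an array of detections, each shaped like
// { pose: { keypoints: [{ part, score, position: { x, y } }, ...] } }.
function getNose(poses, minScore = 0.2) {
  if (poses.length === 0) return null;
  const nose = poses[0].pose.keypoints.find(k => k.part === "nose");
  // Ignore low-confidence detections so the servo doesn't jitter
  return nose && nose.score >= minScore ? nose.position : null;
}

// Example with a mock detection, as the browser camera isn't available here:
const mockPoses = [{
  pose: { keypoints: [{ part: "nose", score: 0.9, position: { x: 300, y: 120 } }] }
}];
console.log(getNose(mockPoses)); // → { x: 300, y: 120 }
```

Filtering on the keypoint's confidence score is a judgment call: too low and the servo twitches on false detections, too high and it loses the person whenever they turn slightly away.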
Arduino Code Repository : https://github.com/lacouture100/Intro-to-Physical-Computation-ITP/tree/master/Week_5/arduino
Thanks for reading!