Just finished 7 days of hardcore hacking with team Bergstat. There's a special kind of euphoria you get from the heady mix of sleep deprivation, intense concentration and seeing your algorithms churn out magic solutions. This one was a week-long hackathon run by Deutsche Bahn to predict staff scheduling conflicts in the rail network. Oh, and to make it more fun it was all in German, but at the end of the day it's just algorithms and maths all the way down.
I spent some time building a machine learning system that can tell the difference between apples and aubergines 🤷‍♂️ - well, actually it can tell the difference between apples and anything that isn't an apple. The key point is that during training the system was only ever exposed to images of apples; getting it to then recognise when something isn't an apple is surprisingly hard. This is called anomaly detection, and it's really useful when you want a system that detects when something out of the ordinary has occurred but you don't know in advance what that out-of-the-ordinary thing will look like. If you're interested you can read about it on Medium.com/@judewells
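The actual project used a TensorFlow autoencoder, but the core trick is the same in any setting: compress the "normal" class through a bottleneck, reconstruct it, and flag anything that reconstructs badly. Here's a minimal numpy sketch of that reconstruction-error idea, with PCA standing in for the autoencoder and made-up feature vectors standing in for apple images:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in data: "apple" feature vectors cluster together.
# The real system trained on image pixels instead.
apples = rng.normal(loc=1.0, scale=0.1, size=(200, 8))

# Fit a low-rank compressor on apples ONLY. PCA plays the autoencoder's role:
# squeeze through a 2-dim bottleneck, then reconstruct.
mean = apples.mean(axis=0)
_, _, vt = np.linalg.svd(apples - mean, full_matrices=False)
components = vt[:2]

def reconstruction_error(x):
    z = (x - mean) @ components.T        # encode to the bottleneck
    x_hat = z @ components + mean        # decode back
    return np.linalg.norm(x - x_hat, axis=-1)

# Anything reconstructing much worse than the training apples is "not apple".
threshold = reconstruction_error(apples).max() * 1.5

new_apple = rng.normal(loc=1.0, scale=0.1, size=8)
aubergine = rng.normal(loc=-3.0, scale=1.0, size=8)

print(reconstruction_error(new_apple) < threshold)    # small error: apple
print(reconstruction_error(aubergine) >= threshold)   # large error: anomaly
```

The key property is that the detector never sees a single non-apple during training; the aubergine is caught purely because the compressor was never shaped to reproduce it.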
.
.
.
#tensorflow #autoencoder #machinelearning #computervision
Our team Pressure X won a £1000 prize at the BAPRAS hackathon for our system that uses machine-learning-based pose classification to prevent the development of pressure sores in hospitals! Super proud of our team, made up of myself and two awesome medical experts @ladidadiana @davidzargaran - honestly had so much fun working on this project for the last 30 hours! We used TensorFlow for real-time pose estimation and a custom k-means clustering algorithm to classify the distinct poses.
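The k-means part of the pipeline can be sketched in a few lines. This is a hypothetical reconstruction, not the team's actual code: the pose names and synthetic keypoint data are invented, and in the real system each row would come from the TensorFlow pose-estimation model rather than a random generator.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in data: each "pose" is 17 (x, y) keypoints flattened to 34 numbers.
# Two synthetic pose groups, e.g. lying flat vs lying on one side.
lying_flat = rng.normal(0.0, 0.05, size=(50, 34))
on_side = rng.normal(2.0, 0.05, size=(50, 34))
poses = np.vstack([lying_flat, on_side])

def kmeans(X, k, iters=20):
    """Plain k-means: assign points to the nearest centroid, recompute centroids."""
    # Farthest-point initialisation keeps this sketch deterministic.
    centroids = [X[0]]
    for _ in range(k - 1):
        d = np.linalg.norm(X[:, None] - np.array(centroids)[None], axis=2).min(axis=1)
        centroids.append(X[d.argmax()])
    centroids = np.array(centroids)
    for _ in range(iters):
        labels = np.linalg.norm(X[:, None] - centroids[None], axis=2).argmin(axis=1)
        centroids = np.array([X[labels == j].mean(axis=0) for j in range(k)])
    return centroids, labels

centroids, labels = kmeans(poses, k=2)
print(labels[:5], labels[-5:])  # the two pose groups fall into different clusters
```

Once the clusters are learned, a new patient pose is classified by its nearest centroid, and how long someone stays in one cluster is what matters for pressure-sore risk.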
The Guinness World Record for the standing high jump is just under 5 ft 4 in. I think I'm cheating a bit by using these foam blocks, which have probably been squished to be a bit shorter, but on these I can do 5 ft from standing. I'm trying to do 5 ft 6 in with a run-up but can't quite yet: here's a compilation of my many fails.
#boxjumps #boxjump #peckhampulse
A neural network is just a giant mathematical equation; in a complex network there are a few million numbers in the equation that need to be tuned to make the equation achieve some objective. Tuning those numbers based on data is the essence of machine learning. I used a generative adversarial network (GAN) to make this video. The network has learned an abstract representation of 1000 image categories, including things like "analog clock", "sea urchin" and "teddy bear". Similar to the human brain, the "concept" or essence of each of these image categories is distributed as patterns of connections between thousands of neurons. I heard a story about a person who had brain surgery without general anaesthetic, and during the operation the neurosurgeon could evoke memories, tastes and emotions just by electrically stimulating different parts of the brain. To me that story says a lot about consciousness and about how our model of the external world is encoded in our brains. There's no technology that lets us experience the world in the way that a dolphin or a bat does, but I'm pretty sure the world looks very different from that perspective. I love the technology of the GAN because it kind of lets you peer into the mind of the machine, to visualise the machine's learned understanding of a clock or a teddy bear. To make this video I told the GAN to generate images from 25 categories. I also specified that it should interpolate in 60 steps when moving from one category to the next, so the first image is 100% praying mantis, the second image is 99% praying mantis and 1% cicada, and it keeps shifting the balance until it produces an image that is 100% cicada, then gradually starts evolving towards the next category. In total it generated 1520 images that were used to make the video.
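The category blending described above can be sketched as interpolation of the GAN's class-conditioning vector. This is a hypothetical illustration: `generator` is a placeholder for the real model (which maps a noise vector and a class vector to an image), and the class indices below are made up, not the actual ImageNet indices.

```python
import numpy as np

num_classes = 1000
mantis, cicada = 315, 316  # illustrative indices only, not the real categories

def one_hot(i, n=num_classes):
    """Class vector that is 100% one category."""
    v = np.zeros(n)
    v[i] = 1.0
    return v

steps = 60
frames = []
for t in np.linspace(0.0, 1.0, steps):
    # e.g. t = 0.01 gives 99% praying mantis + 1% cicada, as in the post
    class_vec = (1 - t) * one_hot(mantis) + t * one_hot(cicada)
    frames.append(class_vec)  # the real code would render generator(z, class_vec)

print(len(frames))                             # 60
print(frames[0][mantis], frames[-1][cicada])   # 1.0 1.0 (pure endpoints)
```

Chaining one such 60-step sweep per pair of neighbouring categories, then rendering each blended vector, gives the sequence of frames that make up the video.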
#biggan #gan #generativeart #generativeadversarialnetworks #phenomenology
I was starting to wonder if my generative adversarial networks were really learning anything at all. All the pictures were just coming out like blurry watercolour paintings. So instead of training them on photos (which are complicated and have multiple colours), I set up an experiment to see if they could learn an abstract representation of a circle. Getting a computer to generate a circle is of course ridiculously simple if you tell it to draw one according to the mathematical rules that define a circle, but in this case I didn't give the algorithm any information about what a circle is: it just had to learn by looking at hundreds of example images of circles. This is what it came up with: 36 separate images generated according to its learned understanding of a circle. It does show one interesting phenomenon of GANs: they can't count - the training set only contained one circle per image, but the GAN can't learn the difference between one and many. You can also see some of the GAN's attempts at representing images of baby deer - the last image is an example of what it was trained on.
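For anyone curious, training data like this is cheap to make: each image is drawn from the distance-to-centre rule that defines a circle, which is exactly the rule the GAN is never told. A minimal sketch (image size and radius ranges are my own choices, not necessarily those of the original experiment):

```python
import numpy as np

def circle_image(size=64, cx=32, cy=32, radius=12, thickness=1.5):
    """Binary image containing exactly one circle outline."""
    ys, xs = np.mgrid[0:size, 0:size]
    dist = np.sqrt((xs - cx) ** 2 + (ys - cy) ** 2)
    # A pixel is "on" when its distance to the centre is close to the radius.
    return (np.abs(dist - radius) < thickness).astype(np.float32)

rng = np.random.default_rng(0)
dataset = [
    circle_image(cx=rng.integers(20, 44), cy=rng.integers(20, 44),
                 radius=rng.integers(6, 14))
    for _ in range(8)
]
print(dataset[0].shape)  # (64, 64) - one circle per example, as in the experiment
```

Randomising the centre and radius per image forces the generator to learn the shape itself rather than memorise one fixed picture.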
.
.
.
#GANs #generativeadversarialnetworks #machinelearning