My thoughts exactly.
Vibration Sensor / train sensor
Great to talk yesterday, will get you some data soon!
Yes, good to meet you and the others at the space. Ping me over some data when you get a chance and we can catch up when I’ve processed it.
I’ve worked with Network Rail’s APIs before if you need a hand. Done in Python with a library I can’t quite remember right now.
I’m looking at what can be pulled from the Darwin API - may be complementary to your vibration data.
Train Describer is probably a better bet for something raw like where a train is on the tracks. Darwin is… only as good as the TD/TRUST data it gets fed, and you don’t get the raw step data (even on the Push Port feed).
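If you end up wanting the raw TD stream, it comes over STOMP from Network Rail’s data feeds platform. A minimal listener sketch in Python with stomp.py might look like this (the host, port and topic are my assumptions about the standard open-data endpoint, and you’d need your own feed credentials):

```python
import json
import time

import stomp  # stomp.py >= 8; older versions use on_message(headers, body) instead


class TDListener(stomp.ConnectionListener):
    def on_message(self, frame):
        # Each message body is a JSON array of TD berth-step messages
        for msg in json.loads(frame.body):
            print(msg)


# Assumed standard Network Rail open-data endpoint; check your account details
conn = stomp.Connection([("datafeeds.networkrail.co.uk", 61618)], heartbeats=(15000, 15000))
conn.set_listener("", TDListener())
conn.connect("YOUR_USERNAME", "YOUR_PASSWORD", wait=True)
conn.subscribe(destination="/topic/TD_ALL_SIG_AREA", id=1, ack="auto")

while True:  # messages arrive on the listener thread; just keep the script alive
    time.sleep(1)
```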
See more of what I’ve worked out so far here
Not yet, sadly. Between holidays and work I’ve not yet set up CSV recording, though I’m a few lines of code away. Another reason for the delay is that I wanted to minimise the noise from the sensor, so the signal is clearer and the dataset size is massively reduced.
After some googling I came across some tutorials on PyWavelets. I don’t understand the maths behind it, but it works pretty well (see graph below): original in blue, denoised in red. I’m going for a square-ish output to minimise data size. This is with the knocking example, so the threshold may need to be changed, but we can play with that. What time are you around tomorrow? Maybe we can look at the best options then.
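For anyone curious, the denoising itself is only a few lines. Roughly this, as a sketch with PyWavelets - the wavelet, level and soft thresholding are my guesses rather than exactly what’s running on the Pi:

```python
import numpy as np
import pywt  # PyWavelets


def wavelet_denoise(signal, wavelet="db4", level=4):
    """Decompose, shrink the detail coefficients, then reconstruct."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    # Noise estimate from the finest detail coefficients ("universal threshold")
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    threshold = sigma * np.sqrt(2 * np.log(len(signal)))
    coeffs[1:] = [pywt.threshold(c, threshold, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(signal)]
```

Playing with the threshold (or switching mode to "hard") is what changes how square-ish the output looks.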
I’ve written functions to take 100 samples, denoise them, then save them to a CSV; I just need to plumb it all together. There’s half-finished code on the Pi in the space, so I wouldn’t run anything on there. I could also FTP the data somewhere, if you have a place I can send it?
What frequency does the accelerometer sample at? 1000Hz? Depending on what method we use, we might get better results with raw data. The problem with de-noising things is you are throwing data away…
Let’s say we look at 5 seconds of data per sample, which is 5000 data points (at 5Hz) per channel, so 15000 points. It’s not by any means an unreasonable amount of data to push through a classifier.
Anyway, should be around all day tomorrow. I have a very small amount of work to do in the morning, then I’m going to be making stuff!!
I mean 1000Hz, not 5Hz!
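For scale, chunking the stream into 5-second windows for a classifier would be something like this (just a sketch, assuming ~1000 Hz and a NumPy array with x/y/z columns):

```python
import numpy as np

SAMPLE_RATE = 1000        # Hz (assumed)
WINDOW = SAMPLE_RATE * 5  # 5000 samples per channel per window


def make_windows(xyz):
    """Split an (n_samples, 3) x/y/z array into 5 s windows,
    each flattened into a 15000-value feature row."""
    n_windows = len(xyz) // WINDOW
    windows = xyz[: n_windows * WINDOW].reshape(n_windows, WINDOW, 3)
    return windows.reshape(n_windows, -1)
```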
Do you think it might be possible to work with @systems to install the networking, and maybe the power, a bit better? There are cables stretched over the social area making it difficult to use the ports for anything else, and I had to disconnect the printer the other day as I didn’t want to interrupt this project. There must surely be a switch on top of the toilet that could be used?
Thanks for not unplugging it. I put in an 8-port switch I found in the snug; it’s behind the printer. If it becomes a permanent install I’ll do something tidier.
I’ll get you raw data then. Its sample rate is much, much slower than that too, more like 100-200 Hz.
I can hopefully get it recording tonight, and then you can grab whatever there is before you leave, which gives you a 24-hour dataset. Will you be using a machine-learning type thing that you ‘train’ with sample data?
Great! I’m not sure, to be honest. It would be interesting to try this type of approach… It might work very well, but also might need quite a bit of training data to get robust results.
As a first step, let’s do some more basic signal processing and see how the data changes when a train is approaching.
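Something as simple as a rolling RMS of the de-meaned signal should make passing trains stand out against the quiet baseline. A sketch (the window length is a guess):

```python
import numpy as np


def rolling_rms(x, window=500):
    """RMS energy over a sliding window; at 100-200 Hz a 500-sample window
    is a few seconds, which should smooth over individual axle hits."""
    x = np.asarray(x, dtype=float) - np.mean(x)  # remove the gravity/DC offset
    squared = np.convolve(x ** 2, np.ones(window) / window, mode="same")
    return np.sqrt(squared)
```

A train passing should show up as a sustained rise in rolling_rms of the vertical axis compared with the background level.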
Sounds good to me. It’s saving 10,000 rows of x/y/z data plus timestamps into each CSV file; the filename is a timestamp too. They’re in ~/trainpi_savedata on the Pi (10.0.4.45, stock Jessie username and password).
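If it helps, something like this should pull those files into Python for a quick look (the column names, and the lack of a header row, are assumptions - adjust to match what the Pi actually writes):

```python
from pathlib import Path

import pandas as pd

COLUMNS = ["timestamp", "x", "y", "z"]  # assumed layout; drop names= if the files have a header row

frames = [
    pd.read_csv(path, names=COLUMNS)
    for path in sorted(Path("~/trainpi_savedata").expanduser().glob("*.csv"))
]
data = pd.concat(frames, ignore_index=True)
print(data.describe())
```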
There are uncommitted changes in the slms_vibrationsensor repo (also in the home folder), so don’t do anything drastic in there.
When you SSH in it’ll be running Byobu (a terminal multiplexer), so you’ll want a cheat sheet of commands if you’re not familiar with it. Great tool if you’ve not used it before. If you’re on a Mac it’s F2, F3, F4 and F6 to create a new shell, go ‘left’, go ‘right’, and detach, respectively. I never figured out the Ctrl-key shortcuts.
Any problems let me know
You get any data Tom?
Anyone else is free to download and analyse the data; @frasco and @peter_hellyer were threatening to come up with approaches…
Here’s a tiny sample of 1,000 data points; for the first two-thirds or so there was a train thurdunking overhead, so it gives an indication of what train / no train looks like - https://docs.google.com/spreadsheets/d/1cKOqdhBzu6c617GZqjiRomedgEntFZoqQNHxkxnReAI/edit?usp=sharing
Here’s a few days of data - https://wetransfer.com/downloads/f7b674e70603b65305a00ebe1e1bbea520170422113742/b165e7ce17f43ee035264ec3e3ac6bda20170422113742/af863f
If you’re saving raw data, you might want to look at writing binary rather than CSV. While I’m pro plain text, you’d be amazed at the efficiency, and this is not archival data.
Off the top of my head, Python’s pickle will read and write Python data structures to files with the utmost ease. Or you could just pack some bytes and append…
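A sketch of the pack-and-append idea with Python’s struct module (the record layout here is just an illustration): roughly 20 bytes per sample instead of 40-50 as CSV text.

```python
import struct
import time

RECORD = struct.Struct("<dfff")  # float64 timestamp + float32 x, y, z = 20 bytes


def append_sample(path, x, y, z):
    """Append one packed record to a binary log file."""
    with open(path, "ab") as f:
        f.write(RECORD.pack(time.time(), x, y, z))


def read_samples(path):
    """Read the whole log back as a list of (timestamp, x, y, z) tuples."""
    with open(path, "rb") as f:
        data = f.read()
    return [RECORD.unpack_from(data, offset) for offset in range(0, len(data), RECORD.size)]
```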
Did you guys see that the doors were triggering large spikes?