Spoilers of what I pulled off
With the hardware gathered and the lights working, the next step was to put the lights in the tree. I took a bag of zip ties that I think I bought with the intention of using them for cable management in my computer (I bought way too many, though). For the most part, this was pretty uneventful, but there are a few things I would do differently if I did this again. The first is that I would plan it more carefully, use more zip ties and attach the lights in more places. The second is that I dislike the colored wires between the lights, so I think I would sleeve them in something green to make them stand out less.
The colored wires do stand out...
I also wanted wifi on my Raspberry Pi, so I wouldn't have to run a network cable to it. As I realized last time, the device itself doesn't have wifi hardware, so I went looking for wifi adapters and found two. Both are old and were probably cheap, but they were worth a shot. The first adapter wasn't recognized by the Raspberry Pi, but the second one worked well, and after some messing with the configuration, I was able to freely move the device anywhere I had power.
Getting wifi running
The next part was the real challenge: mapping the LEDs to three-dimensional coordinates. I did actually send an email to Matt Parker to see if he could share the code he used for this, but unfortunately, he didn't have it in a usable form. Not wanting to wait, I started working on my own.
Getting the camera to take a picture was both easy and hard at the same time. Copying some code from the internet and saving an image wasn't that hard. However, there were basically three libraries to choose from, none of them was exactly what I wanted, and I definitely didn't like that you have to install non-Python dependencies for each of them. In the end, I went with pygame, which uses SDL2 under the hood.
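A minimal capture with pygame's camera module looks roughly like this (the resolution is just an example value):

```python
import pygame
import pygame.camera

pygame.camera.init()
cam = pygame.camera.Camera(pygame.camera.list_cameras()[0], (1280, 720))
cam.start()
image = cam.get_image()                  # a pygame.Surface
pygame.image.save(image, "capture.png")
cam.stop()
```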
My complete mapping setup
There were also several issues with the camera that I had to solve once I started using it more extensively. The first was that the first photo would often have a different brightness at the top than at the bottom, with a clear dividing line about a fifth of the way from the top. This didn't show up at first, as it only starts happening when you run the program more than once. I solved it by taking an image from the camera on startup, discarding it, and then working only with the images taken later on.
The next problem was something I started noticing when I was working on the mapping: the photos taken were often from well before I gave the command, sometimes by ten or more seconds. I got around this by re-initializing the camera every time I wanted to take a picture, as then the picture would always end up being current. To prevent the previous brightness issue, though, I had to discard an image each time I did this.
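Put together as a sketch, the workaround looks something like this; the helper name and resolution here are illustrative rather than lifted from the actual code:

```python
import pygame
import pygame.camera

pygame.camera.init()

def take_photo(resolution=(1280, 720)):
    """Open the camera fresh, throw away the first frame, return the second."""
    # Re-creating the camera for every shot avoids getting a stale, buffered
    # frame; discarding the first image avoids the uneven-brightness artifact.
    cam = pygame.camera.Camera(pygame.camera.list_cameras()[0], resolution)
    cam.start()
    cam.get_image()              # throwaway frame
    image = cam.get_image()      # the frame actually used
    cam.stop()
    return image
```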
With the camera working, the next step was to start identifying the LEDs and their locations. The idea was simple: turn on an LED, take a picture, identify the brightest pixel in the picture, and repeat for the next LED. I also added a step of drawing some lines on the picture, so I could see the result.
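A minimal version of the brightest-pixel step, using numpy on the pygame surface, could look like this (the function name is mine, not from the actual code); the scan loop then just lights one LED at a time and calls it on each photo:

```python
import numpy as np
import pygame.surfarray

def brightest_pixel(surface):
    """Return the (x, y) coordinates of the brightest pixel in a Surface."""
    rgb = pygame.surfarray.array3d(surface)       # shape: (width, height, 3)
    brightness = rgb.astype(int).sum(axis=2)      # simple per-pixel brightness
    x, y = np.unravel_index(np.argmax(brightness), brightness.shape)
    return int(x), int(y)
```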
Pretty orange lines
The first result wasn't bad for some of the LEDs, but just going with the brightest pixel didn't quite work for others. So, I changed my code to take the average of several bright pixels. My first implementation was quite bad and allowed a gradual increase in brightness to shift the average towards the top of the image. But even after fixing that, the results weren't to my satisfaction at all. So, I decided to take a different approach. Instead of looking for the brightest pixels, I just look at the pixels that are above a hardcoded brightness.
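In sketch form, the threshold approach looks something like this (the threshold value is only an example):

```python
import numpy as np
import pygame.surfarray

BRIGHTNESS_THRESHOLD = 600   # example value; per-pixel maximum here is 765 (3 * 255)

def led_position(surface):
    """Average the coordinates of every pixel above a fixed brightness."""
    rgb = pygame.surfarray.array3d(surface)
    brightness = rgb.astype(int).sum(axis=2)
    xs, ys = np.nonzero(brightness > BRIGHTNESS_THRESHOLD)
    if len(xs) == 0:
        return None                              # LED not visible in this picture
    return float(xs.mean()), float(ys.mean())
```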
For the most part, that gave pretty good results. However, when a lot of light was being reflected, the reflection of the LED would sometimes throw off the calculation. The way I solved this was by lowering the brightness of the LEDs. Originally, I had assumed that full brightness was a good idea, but in the end about 20% brightness gave better results because there were far fewer reflections that were bright enough to be taken into account.
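On the LED side, dimming everything to roughly 20% might look like the following; this assumes the common rpi_ws281x library, and the LED count and pin are placeholders, since the actual driver code isn't shown here:

```python
from rpi_ws281x import PixelStrip, Color

LED_COUNT = 500    # placeholder
LED_PIN = 18       # placeholder GPIO pin

# brightness runs from 0 to 255, so roughly 20% is about 51
strip = PixelStrip(LED_COUNT, LED_PIN, brightness=51)
strip.begin()
strip.setPixelColor(0, Color(255, 255, 255))   # light a single LED for the scan
strip.show()
```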
Next, it was time to do multiple scans and combine the results. Doing multiple scans was straightforward: just put the code in a function and call that function four times, with a bit in between where I tell the user to rotate the tree. For combining, I gave each mapping result a score, based on the brightness and on how many pixels were involved. Then, I took the y coordinate from the highest score on any of the four scans, the x coordinate from the best among the front and back scans, and the z coordinate from the best among the left and right scans. Those results I then printed to a file.
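A sketch of the combining step, with an illustrative data layout (and ignoring the mirroring that the back and right views would need):

```python
def combine_scans(front, back, left, right):
    """Combine four per-LED scan results into (x, y, z) coordinates.

    Each argument is a list with one entry per LED: a dict with keys
    'horizontal', 'vertical' (pixel coordinates) and 'score'.
    """
    coordinates = []
    for f, b, l, r in zip(front, back, left, right):
        # y (height) comes from whichever of the four views saw the LED best
        y = max((f, b, l, r), key=lambda s: s['score'])['vertical']
        # x comes from the better of the front/back views,
        # z from the better of the left/right views
        x = max((f, b), key=lambda s: s['score'])['horizontal']
        z = max((l, r), key=lambda s: s['score'])['horizontal']
        coordinates.append((x, y, z))
    return coordinates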
In order to check the results, I quickly threw together some code that read the file (this was basically a copy-paste job from Matt's code) and then changed the color of each LED based on either the x, y or z portion of its coordinate. And I must say, I'm not at all displeased with the result. It's not perfect, but it's quite good, especially considering this was based on the first time I did a full scan of the tree.
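Roughly, the check looks like this; the file format, library and values are assumptions rather than a copy of the actual code:

```python
from rpi_ws281x import PixelStrip, Color

strip = PixelStrip(500, 18, brightness=51)     # same placeholder setup as above
strip.begin()

# The comma-separated "x,y,z per line" file format is an assumption.
with open("coordinates.txt") as f:
    coords = [tuple(float(v) for v in line.split(",")) for line in f]

AXIS = 1                                       # 0 = x, 1 = y, 2 = z
values = [c[AXIS] for c in coords]
low, high = min(values), max(values)
span = (high - low) or 1.0

for i, value in enumerate(values):
    t = (value - low) / span                   # normalize to 0..1
    strip.setPixelColor(i, Color(int(255 * t), 0, int(255 * (1 - t))))
strip.show()
```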
There are still LEDs that pretty clearly have inaccurate coordinates. So, that's what I'll have to do next time: build the tools to see which LEDs are wrong and correct those...
Oh, also: my code is now on GitHub.