cobright
DIS Veteran
Your job is really cool. I am a science person, and my DD16 is currently taking college-level computer science courses, so hearing about this stuff is always neat!
Thank you. I ended up with a much, much more technical job than I ever intended. I have a Master's degree in International Relations and Global Trade. Then, before I could find a job in my field, I created a large bronze architectural sculpture for a developer friend and ended up with a slew of commissions. Working as an artist generally meant 'working' 4 months, then taking 8 months off. That left me time to care for an old friend starting treatment for pancreatic cancer that was not expected to work. Disney is her safe space too, so we started going as often as possible. Right away her treatment wrecked her mobility, and the scooter her insurance put her in was garbage. I picked up a secondhand Jazzy (usually a sad story comes with a secondhand wheelchair ... oof) and started tinkering.
The computer science came later. I just figured: we have the technology to build a $500 toy drone that can fly itself from one point to another, avoiding obstacles and people, and then land on a moving platform; yet at least a dozen times a year I hear someone say they won't rent an ECV at WDW (or, worse, won't get one for everyday use) because they are afraid they'll wreck it or clip someone.
So I had to learn some programming. In the last 2 years, hardware optimized for AI, computer vision, and machine learning has exploded. The first step the control system takes is to look around and identify all the things around it. It does this mainly so it can decide which of the things it sees are stationary and which are moving; and of the moving objects, it wants to know what they are, because certain types of moving objects move in particular ways. My favorite demonstration video for this step used to be this clip from the movie Skyfall...
The cool part of this step is that this software, YOLO, runs in real time, taking the video and identifying and classifying dozens of independent elements nearly instantly. The system I have running on Aisling's powerchair is able to do this level of video processing for 8 high-def cameras simultaneously. The computer board, an Nvidia Jetson Nano, is only $100 and is the size of a pack of smokes (whatever those things are).
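If you're curious what that detect-and-classify step looks like in code, here's a toy version using the off-the-shelf ultralytics YOLO package and a single webcam. To be clear, this is just a sketch to show the shape of the thing, not the actual code on the chair, which runs a build tuned for the Jetson across all 8 feeds:

```python
# Toy version of the detect-and-classify step: one webcam standing in
# for one of the chair's 8 cameras, and a small pretrained YOLO model.
import cv2
from ultralytics import YOLO

model = YOLO("yolov8n.pt")  # small pretrained model, ~80 everyday classes
cap = cv2.VideoCapture(0)   # camera 0 stands in for one of the feeds

while True:
    ok, frame = cap.read()
    if not ok:
        break
    # One pass over the frame returns every object the model recognizes,
    # each with a class label, a confidence score, and a bounding box --
    # the raw material for sorting stationary things from moving ones.
    for result in model(frame, verbose=False):
        for box in result.boxes:
            label = model.names[int(box.cls[0])]  # e.g. "person", "bench"
            conf = float(box.conf[0])
            x1, y1, x2, y2 = box.xyxy[0].tolist()
            print(f"{label} ({conf:.2f}) at [{x1:.0f},{y1:.0f},{x2:.0f},{y2:.0f}]")
```

Each camera feed gets the same treatment, and the per-frame lists of labeled boxes are what the next stage works from.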
The computer then analyzes subsequent frames and determines the direction and speed of everything, and then, based on its past experience observing each type of object, it predicts where that object will be relative to her chair for the next few seconds. This is where the eye-contact determination I mentioned before comes in. The computer knows that a person who actually looks at and sees my friend in her wheelchair is less likely to abruptly turn and walk right in front of her. I 'wrote' the code that checks for eye contact into the control software, meaning I cut and pasted it from a library where it was designed for digital cameras, so they won't take a photo unless everyone's eyes are open. But that's all I did. Once the program had that bit of data assigned to people-objects, it learned over the course of a few thousand hours that the people who made eye contact behaved differently, and it refined the way it predicted their movement to reflect that.
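Here's a similarly stripped-down sketch of the prediction step. The straight-line extrapolation and the made-up uncertainty numbers are mine for illustration; the chair's actual predictions were learned over all those hours of observation rather than hand-coded like this:

```python
# Toy version of the prediction step: estimate each tracked object's
# velocity from its last two observed positions, then extrapolate a few
# seconds ahead. The eye_contact flag and the uncertainty numbers are
# illustrative, not the learned model described above.
from dataclasses import dataclass

@dataclass
class Track:
    label: str                 # class from the detector, e.g. "person"
    x: float                   # current position, metres, chair's frame
    y: float
    prev_x: float              # position one step earlier
    prev_y: float
    eye_contact: bool = False  # set by a separate eyes-on-camera check

def predict(track, dt, horizon):
    """Yield (t, x, y, radius) out to `horizon` seconds, every `dt`.
    `radius` is an uncertainty margin that grows faster for people who
    never made eye contact, since they may turn abruptly. For simplicity
    `dt` doubles as the observation interval and the prediction step."""
    vx = (track.x - track.prev_x) / dt
    vy = (track.y - track.prev_y) / dt
    # people who saw the chair get a tighter margin (hypothetical numbers)
    growth = 0.1 if (track.label == "person" and track.eye_contact) else 0.4
    t = dt
    while t <= horizon + 1e-9:
        yield t, track.x + vx * t, track.y + vy * t, growth * t
        t += dt

# Example: a pedestrian 3 m ahead, drifting left, who hasn't looked over.
ped = Track("person", x=0.1, y=3.0, prev_x=0.0, prev_y=3.1)
for t, px, py, r in predict(ped, dt=0.5, horizon=2.0):
    print(f"t+{t:.1f}s: ({px:.2f}, {py:.2f}) ± {r:.2f} m")
```

The point of the eye_contact flag is just to show where that bit of data plugs in: a person who has actually seen the chair gets a tighter margin, so the planner doesn't have to swing as wide around them.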
I'll stop myself now because I really am just running on. In my defense, the subject is pretty fun to consider; it really is the stuff of sci-fi from 10 years ago. And it's a shame that mobility applications seem to be the last thing on the industry's mind when this tech is developed.
I love this stuff even if it is almost certainly how the robot uprising begins.