Thursday, September 27, 2018

Smart Mirror

RPiCopter, Unity, and graphics are all halted, with the intent of returning to them when interest or usefulness picks up again.

So I am building a smart mirror now!

Here's how the idea started:
I am moving into a new place, and my friends have been giving me grief for not having any wall decor. So, I decided to make my own. I had the idea of an Iron Man-esque setup, with my own Jarvis helping me out and stuff, as any true mechanical engineer wants (and should). Step 1: Jarvis.

I debated between using Mycroft's open AI, Alexa, and Google Assistant as my base, since I'm obviously way out of my depth in creating my own AI. I settled on Google Assistant because I'm a Google fanboy (Google, please hire/fund me), and also because they have the most straightforward API for the Raspberry Pi 3. It took me several tries, but I got it to work. Plus, they have a Python API, so I don't need to work too hard to figure out how to use it.
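
For reference, the core of it is just an event loop. Here's roughly what mine looks like, adapted from the library's hotword sample (a sketch only: the credentials path and device model ID are placeholders, and the exact arguments vary between SDK versions):

```python
import json

import google.oauth2.credentials
from google.assistant.library import Assistant
from google.assistant.library.event import EventType

# Placeholders -- substitute your own OAuth credentials file and the
# device model ID you registered for the project.
CREDENTIALS_PATH = '/home/pi/.config/google-oauthlib-tool/credentials.json'
DEVICE_MODEL_ID = 'my-smart-mirror-model'

def process_event(event):
    # React to a couple of interesting events; print everything else.
    if event.type == EventType.ON_CONVERSATION_TURN_STARTED:
        print('Listening...')
    elif event.type == EventType.ON_CONVERSATION_TURN_FINISHED:
        print('Done.')
    else:
        print(event)

with open(CREDENTIALS_PATH, 'r') as f:
    credentials = google.oauth2.credentials.Credentials(token=None, **json.load(f))

with Assistant(credentials, DEVICE_MODEL_ID) as assistant:
    for event in assistant.start():
        process_event(event)
```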

Hardware-wise, I am using a Raspberry Pi 3, an ultrasonic sensor, a 7'' HDMI LCD display, a Logitech webcam, a USB sound card, 3.5mm speakers, and a simple 3.5mm desktop mic.

For the prototype mirror, I bought mirror-reflective window tint film on Amazon and a 12'' x 12'' photo frame from Michaels. I removed the glass from the frame, applied the mirror film, and replaced the glass. Then, in the backboard of the frame, I cut out two large rectangles: one for my LCD screen so that it would be in direct contact with the glass, and another for the camera for the same reason. I blacked out the front of the backboard with black construction paper and temporarily taped the parts in place. I then attached the ultrasonic sensor to the bottom edge of the frame, because unlike the camera, it is not going to work through the semi-transparent reflective film.
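
Reading the ultrasonic sensor on the Pi is only a few lines of Python. Here's a minimal sketch, assuming an HC-SR04-style sensor; the GPIO pin numbers are just placeholders for however you wire it:

```python
import time
import RPi.GPIO as GPIO

TRIG = 23  # example pins -- use whatever you actually wired
ECHO = 24

GPIO.setmode(GPIO.BCM)
GPIO.setup(TRIG, GPIO.OUT)
GPIO.setup(ECHO, GPIO.IN)

def read_distance_cm():
    # Send a 10 microsecond trigger pulse
    GPIO.output(TRIG, True)
    time.sleep(0.00001)
    GPIO.output(TRIG, False)

    # Time how long the echo pin stays high
    pulse_start = pulse_end = time.time()
    while GPIO.input(ECHO) == 0:
        pulse_start = time.time()
    while GPIO.input(ECHO) == 1:
        pulse_end = time.time()

    # Speed of sound is ~34300 cm/s; halve it for the round trip
    return (pulse_end - pulse_start) * 34300 / 2

try:
    while True:
        print('%.1f cm' % read_distance_cm())
        time.sleep(0.5)
finally:
    GPIO.cleanup()
```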

The film doesn't really create a mirror finish, and the reflection looks blurred, but it really is good enough for a prototype. The mirror is ideal under certain lighting conditions; in others, you can clearly see the mounted display and camera, or the display is too bright and looks a different color. The latter problem will almost definitely be fixable in the future with larger and more affordable OLED displays, because the main cause is the backlight.

The backlight is always turned on, so even when the color of the screen is set to black, i.e. nothing is being displayed, it still emits some light. This is annoyingly noticeable when you turn the lights off in the room: because there is more light on the screen side of the mirror than on the outside, the light all travels out, and the mirror basically turns into a night light. OLED actually turns off pixels, so there is no backlight effect.


As for the product itself, here's what I envision:
The mirror will be completely voice activated. When you walk toward it, it will turn on and welcome you, allowing you access to a multitude of features. All other times, it will merely be wall decor. It will be able to show you many preset features, including: weather, to-do lists, stock trends and prices, web articles, etc.
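
The walk-up-and-it-wakes behavior is really just a loop over the ultrasonic readings. Here's a rough sketch of the logic; read_distance_cm() is the sensor helper from the prototype section, and wake()/sleep_display() are hypothetical stand-ins for whatever actually drives the screen:

```python
import time

WAKE_DISTANCE_CM = 80    # someone standing at the mirror
HOLD_SECONDS = 2         # how long they need to linger before it wakes
IDLE_TIMEOUT = 30        # seconds of empty room before it goes dark again

def presence_loop(read_distance_cm, wake, sleep_display):
    awake = False
    near_since = None
    last_seen = 0.0
    while True:
        if read_distance_cm() < WAKE_DISTANCE_CM:
            last_seen = time.time()
            near_since = near_since or last_seen
            if not awake and time.time() - near_since >= HOLD_SECONDS:
                wake()           # greet the user and show the UI
                awake = True
        else:
            near_since = None
            if awake and time.time() - last_seen > IDLE_TIMEOUT:
                sleep_display()  # back to being wall decor
                awake = False
        time.sleep(0.2)
```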

The current idea I am running with, however, is that it is a daily reflection (hehe... he... he... get it?) tool, designed to help you form helpful habits at the beginning and end of the day. Such habits could include: morning posture correction, a morning routine drill for tooth brushing, flossing, water drinking, etc., a daily audio journal, and a daily thankfulness/mindfulness activity. Any and all of these functions could be selected via a paired smartphone app, to allow a personalized user experience.

I have been working on this off and on (more off) for a month, and I have the display, a basic UI, Google Assistant, the ultrasonic feature, and the camera working. It was suggested to me to put the camera in there for many possible future functions, for example: personalized displays from facial recognition (working on that right now), a timelapse collage of photos, a security camera mode, video calls, emotion recognition, and stress level detection from photo analysis.
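
For the facial recognition piece, I'm experimenting along these lines with the face_recognition Python library. This is just a sketch of the idea, with made-up names and file paths rather than the mirror's actual code:

```python
import face_recognition

# One reference photo per user (hypothetical paths)
known = {
    'me':       face_recognition.face_encodings(
                    face_recognition.load_image_file('faces/me.jpg'))[0],
    'roommate': face_recognition.face_encodings(
                    face_recognition.load_image_file('faces/roommate.jpg'))[0],
}

def identify(frame_path):
    """Return the name of the first known face in the image, or None."""
    image = face_recognition.load_image_file(frame_path)
    for encoding in face_recognition.face_encodings(image):
        matches = face_recognition.compare_faces(list(known.values()), encoding)
        for name, hit in zip(known.keys(), matches):
            if hit:
                return name
    return None

print(identify('snapshot_from_webcam.jpg'))
```

The identified name can then pick which user's weather, to-do list, and routines get displayed.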

Work in progress; will add pictures soon. I could possibly sell something like this if I made many quality improvements. Until then, it'll be just for me. And it'll be really cool. And you all will be jealous. Google, please fund me.

Tuesday, May 17, 2016

Raspberry Pi Quadcopter (Almost 1 year update)

So I haven't been working on the quad very much. I took it to college with me, but I couldn't find the time, space, or motivation to make any significant progress with it. The problem I was having last winter break, I believe, was that the code from PiStuffing worked, but the quad just kept accelerating to its maximum thrust and blew off its propeller caps. At first, I thought that this might be a problem with the PID gains, because the guy who wrote the code probably had a quad with different specs and mass properties. I thought I would have to somehow find the right gains for my design.

Then, just recently, I had a different idea. I realized that the commands we were giving the quad were just simple velocity commands, so the gains had nothing to do with the constant max thrust. Even if the gains were wrong, we would still see a constant thrust (PWM signal) rather than an increasing one when the velocity was set to a constant. The problem is not with the code; it is with the experiment. When testing, I did not want to destroy my ceiling or fly into international airspace, so I had tethered the quad to the ground using thread, with some slack.

When I ran the tests, the incorrect gains caused the quad to thrust up a little too much, but because of the thread, the quad could not rise effectively. The PID took the small velocity error as an input and, since we still needed more velocity, increased the thrust as an output. Once the thread was pulled taut, the measured velocity stayed at zero, so the PID just kept increasing the thrust until the max thrust was reached.
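
In other words, it's textbook integral windup: the setpoint velocity can never be reached, so the error never shrinks, and the integral term accumulates until the output hits its ceiling. A toy loop shows the effect; this is purely an illustration with arbitrary numbers, not the PiStuffing code:

```python
# Toy demo of integral windup: the "quad" is held at zero velocity
# (tethered), but the controller keeps asking for 1 m/s.
kp, ki, kd = 0.5, 15.0, 0.0
setpoint = 1.0          # desired vertical velocity (m/s)
measured = 0.0          # thread is taut, so this never changes
max_thrust = 100.0      # PWM-style output ceiling

integral = 0.0
prev_error = 0.0
dt = 0.02

for step in range(400):
    error = setpoint - measured
    integral += error * dt
    derivative = (error - prev_error) / dt
    prev_error = error

    thrust = kp * error + ki * integral + kd * derivative
    thrust = min(thrust, max_thrust)

    if step % 100 == 0:
        print('t=%.1fs thrust=%.1f' % (step * dt, thrust))
# The thrust ramps steadily until it saturates at max_thrust, even with
# "reasonable" gains, because the error can never be closed.
```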

I need to find a more effective way to test the quad.

SOLIDWORKS and MATLAB

Robotics is an expensive hobby. With all of the electronic parts, the structural components, the power supplies, and the testing equipment, it is hard to find an easy way for me to build some of the ideas I have in mind. For example, I wanted to prove to myself that I could fully design and build a robotic arm, but I have no idea where to start, since randomly buying servos and parts and hoping it works is not good engineering practice, nor is it cost-efficient.

I talked to one of my professors, Dr. Panagiotis Artemiadis, about using some of his EMG devices to control a simple robotic arm as an interesting project. He agreed to help me (and I am extremely thankful), but first I have to have a robotic arm to use. I decided that instead of spending so much time and money on building a real arm, why not design one in SOLIDWORKS and simulate the motion and control systems in MATLAB? I learned both of these programs in my first two years of college. So I created a very rudimentary robotic arm in SOLIDWORKS, with only two degrees of freedom, meaning two links.

Once I had that assembly made, after a little bit of online research, I installed the SimMechanics toolbox into MATLAB and the SimMechanics Link into SOLIDWORKS. Then I exported the assembly as an XML file through the SimMechanics Link, which seems to be the only way to actually import a model into MATLAB/Simulink. When I opened the XML model of the arm in Simulink, the program had automatically created a project with all of the individual parts from my assembly, with all the relations put in.

When I ran the simulation, the output was a 10-second video of the arm starting in one orientation and then moving based on the effects of gravity and the relations and geometries of the parts. Because it was just a two-joint arm, it modeled like a double pendulum, with very complex motions.

I realized that the Simulink model did not take into account any collisions between the parts, as the arms swung freely through the base despite both being solid parts. I am now going to focus on adding angle constraints to prevent part collision. I will also add a PID control loop, with a step function and joint actuators at the hinges, to be able to control the angle and measure the torque necessary to hold up the arm.

That max torque will tell me how powerful a servo will be necessary to power the arm without breaking.
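
For a single link, the worst case is holding it horizontal, where gravity's moment arm is longest, so the sanity check is quick. Here's a sketch with made-up masses and lengths (not my actual arm's numbers):

```python
import math

g = 9.81            # m/s^2

# Hypothetical link properties
m_link = 0.30       # kg, mass of the link (acting at its center of mass)
L_link = 0.25       # m, link length
m_payload = 0.20    # kg, whatever the arm holds at the tip

def holding_torque(theta_deg):
    """Static torque (N*m) at the joint to hold the link at theta
    degrees above horizontal."""
    c = math.cos(math.radians(theta_deg))
    return g * c * (m_link * L_link / 2 + m_payload * L_link)

worst = holding_torque(0)   # horizontal is the worst case
print('Worst-case holding torque: %.2f N*m' % worst)
print('In servo-catalog units: %.1f kg*cm' % (worst / g * 100))
```

The Simulink joint-actuator torque plot should agree with this kind of hand calculation, which is a nice cross-check before buying anything.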

Friday, March 18, 2016

Does technology drive new product development?

Whenever I see a new piece of tech come out to the market and get mad publicity, it seems like there is always some crazy breakthrough research being used. How do I go about coming up with creative ideas for a useful, successful product if I don't have the research to back it?

My current creative process involves taking technologies that I found on the internet, mashing them together with preexisting ideas from my own experience, and, part by part, working the idea into a product. Then, when I check online whether anything like this already exists, it turns out that it does, and it is 100 times better than the one I thought of.

Either that happens, or my idea for a cool invention is just so completely bizarre and unnecessary that I can't justify turning it into a reality.

In the past, people with truly revolutionary ideas, like Tesla or Google, had original ideas that were ahead of their time. Maybe taking inspiration from current new tech is the wrong approach. Maybe the best way to do this is by changing the way I come up with new ideas. Maybe I can try avoiding new tech altogether for a while and see what happens. I draw inspiration from it, but if I stop that, perhaps I can focus solely on looking for real-world problems that need to be solved. That is, of course, what most experienced people suggest.


Monday, March 14, 2016

EEG vs EMG

In the last post, which was a while back, I was discussing some ideas I had about the EEG headset and future technologies. The obvious problem with that is that the tech is so underdeveloped, due to our minimal knowledge of the human brain, that we can't hope to create any worthwhile robotic applications in the near future.

When I was discussing this, a friend of mine pointed me toward another similar tech called electromyography, or EMG. It uses actual muscle activation as the data input, instead of extrapolated neural signal patterns, which currently have a very low degree of accuracy. With EMG, you place electrodes, similar to the EEG electrodes, on specific locations on the muscle you are targeting. The sensor detects the electrical signals sent to the muscles through the nerves. This signal has a direct influence on the tension or relaxation of the muscle, and there is almost no way to interpret it incorrectly.

Myo is a product already available for purchase that uses this tech for basic audio/video control, such as pause, play, fast forward, and rewind: https://www.myo.com. It is an armband that detects gestures and relays them to any device over Bluetooth. It is also open for developers, and people (like in this video: https://www.youtube.com/watch?v=nDeOFxhH5lY) have used it in basic robotics as well.

I want to go a step further and use the EMG tech to develop a fully prosthetic robotic arm/robotic sleeve that slips onto the arm.

The device will be an extension of the user. There are endless possibilities. Spiderman webs shooting from one end? A specific hand pattern to unlock things, improving security? A hydraulic shoulder extension to punch harder? It'll be like Inspector Gadget! This is within our current capabilities as engineers.

Wednesday, January 27, 2016

Brainwave Reading Applications

As mentioned last time, this new technology has limitless applications. It is one of the first steps toward integrating technology into the human body.

The mind is a highly versatile and flexible tool. Techniques such as hypnosis, as well as experiences like learning a new skill or habit, show that the brain is not completely hardwired, and can be rearranged to link one mental thought pattern (trigger) to a completely unrelated action. This means we can link different thoughts to the specific effects they cause through the EEG headset. This has been most prominently used in prosthetics. New experiments have shown that artificial limbs can be controlled by thoughts through the headset in a natural, fluid motion. Obviously, you're not using the same neural pathways to control the artificial hand as the ones everyone else uses; however, the different pattern becomes second nature after constant use. That is how we learn.

A direct extension of this is the use of EEG headsets to control a full robotic body!

-Jump cut to the scene from James Cameron's Avatar (2009)- and now replace the blue humanoids with the robots from I, Robot. HOW COOL WOULD THAT BE!!

Of course, this level of thought reading is probably hundreds of years in the future, but the implications are so beautiful. Currently, the commercially available headsets have a very narrow range of functions. The MindWave specifically reads raw data, attention, relaxation, blinks, and a broad spectrum of brainwave frequencies. While this relatively mediocre technology reads a large bandwidth, it cannot effectively read more detailed fluctuations/patterns.

I was looking at the problem of controlling an RC car with this tech. There are four basic commands needed to operate it: forward acceleration, negative acceleration, turn left, and turn right. The headset only reads two mental states: attention and relaxation.

Some people have approached this problem by only using the headset for part of the controls, for example, using a joystick for direction and the headset for moving forward or backward. I don't know about you, but this seems pretty boring and unnecessary to me. I want full control. For this, I think we need to experimentally find new mental states/brainwave patterns to use.
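
Just to make the mismatch concrete, here's the kind of mapping you're forced into with only two usable signals. The read_attention()/read_meditation() readers and send_to_car() are hypothetical placeholders, not real NeuroSky code:

```python
import time

# Attention and meditation (relaxation) values run roughly 0-100, so all
# you can really do is carve out a handful of command regions by threshold.
FORWARD_THRESHOLD = 70
REVERSE_THRESHOLD = 70

def pick_command(attention, meditation):
    """Map two mental-state values onto the RC car commands.
    Note there is no natural home for 'turn left' vs 'turn right' --
    that's exactly the problem."""
    if attention > FORWARD_THRESHOLD:
        return 'forward'
    if meditation > REVERSE_THRESHOLD:
        return 'reverse'
    return 'coast'   # left/right would need a third and fourth signal

def control_loop(read_attention, read_meditation, send_to_car):
    while True:
        send_to_car(pick_command(read_attention(), read_meditation()))
        time.sleep(0.1)
```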

To be continued...


Monday, January 25, 2016

TELEPATHY!!

Recently, while I was sitting in the university library, letting my thoughts wander and thinking about the technology of the future, I remembered a TED talk I had watched a couple of years ago about a headset that could read your thoughts! It was pretty far developed at the time, but I never heard of it actually taking off. So I looked into it again, and what I found was a gold mine!

There are currently so many businesses and research labs designing and working with small EEG (electroencephalogram) scanners, in the form of a headset or a cap, that read your thoughts. The scanners read a wide range of electrical signals from your brain, and neuroscience has correlated the different frequencies with different thought patterns.

Currently, the most common applications for these devices are treating mental health through biofeedback and helping the disabled. However, several low-cost scanners have also been added to the market for enthusiasts and developers.

The potential is endless for this tech. It can be integrated into robotics, the Internet of Things, and data analytics, just to name a few. The best part is that while the actual devices are a minimum investment of $100, the software to operate them is virtually free, making it a developer's dream!

NeuroSky is one of the companies I was looking into. They have a cheap EEG headset with Bluetooth capabilities and, furthermore, have made it a point to invite developers to use their product. While I don't doubt that they have some selfish motives for this (publicity and an increased market size), giving people a chance to experiment and play with it is a big step in pushing technological development forward.

I am eager to invest in a NeuroSky headset and experiment with it. In my next post, I'll explain some of the ideas I have for this technology and its future, and also how I plan to use it.