Friday, December 9, 2016

We Done?

We've reached an unfortunate point now. Lab is over, and soon the entire semester will be as well. It's been fun.

Well, here's what we ended up creating over these past two months or so.

Unfortunately Blogger is not very good with videos.

Monday, December 5, 2016

The Solutions to your Issues Lie in the Most Obscure Places

Memory Management Revisited

So remember the issues that we were having with the Photon and its memory management? Turns out C++ is pretty hard. One simple mistyped character caused a cascade of instability.

One Character?

Yep. One character.

The model we used was essentially an operating system for the Photon, with the input device being a 5-way button thingy and the output device being the screen. The system itself was composed of "input responders," a rough classification for anything that responds to user input.

This model made use of both Menus and Info screens. The menus consisted of the location menu, which gives the location of the device using Google's Geolocation API, and several first aid menus, which act as collections of responses to first aid situations. Then come the info screens, which give users the information they need to respond to various situations.

The issue we were experiencing arose with the first aid menus. Though we had passed a parameter for the number of different situations on each menu, we happened to forget that while drawing the screen. Instead, we used the constant 5, which happened to be more than the number of situations. And because this is C++, the Photon happily read data it shouldn't have. Sometimes other data was there, which produced the random gibberish that occasionally popped up everywhere.

A simple fix made the device stable. Or, well, almost. There was another issue that impeded progress.

Be the Better Programmer

This issue arose from my attempt to be a good programmer. Whenever I compiled the code, the compiler always warned me about the deprecated conversion from Particle's String to a char array. So I decided to create a method that would solve this issue and convert it without the trouble.

As luck would have it, this method caused more trouble than it solved. In the conversion process, random characters were added on for some reason. Perhaps it was in the length method, or perhaps it was something else. But one way or another, these random characters were tacked on.

It was perhaps these random characters that also caused issues similar to the ones we had solved beforehand. The library we used for the Nokia 5110, our LCD, stores a bitmap for each character to define how it should be rendered. But since these random characters were outside the range of that array, the Photon was once again accessing garbage data. Best case, it rendered as blank. More commonly, it rendered as random pixels.

Turns out that, like most deprecated things, support still exists. So instead of doing the conversion ourselves, we just let the Photon do it for us. It works just fine that way.

So why the Crashing?

To be honest, I can't tell. My best guess is that either the bitmap or the array was allocated near the end of memory, so accessing out-of-bounds data resulted in attempting to access memory that doesn't exist. And the Photon responds by crashing.

Can't blame it.

Of course, this is my best guess. Perhaps someone at Particle could explain this much better than I ever could.

Monday, November 28, 2016

Memory Management Fun

Flash?

There comes a time in every programmer's life when they ask themselves, "how much memory is my code using?" Now is that time.

Background: what are we doing?

Being a first aid kit add-on, the logical addition to our product is a way for people to find first aid information in order to help people in trouble. So, of course, we decided to go and make essentially a new operating system for the Photon to run off of. What could go wrong?

C++ is Fun (this is a lie)

Though it's quite widely known and I do have a bit of experience, C++ still posed quite a few challenges in programming. To put it bluntly, I'm still working on it. The inclusion of libraries and a lot of other factors have made this project quite a bit more complicated than I would have liked, but this is the life that I've chosen.

Crashing and Memory Management

The Photon and its firmware are both quite fickle, and the slightest perturbation has caused countless crashes in our testing. Code that works fine one moment crashes the next, sometimes without any changes at all, though that's rare. Much more common is random crashing caused by removing code that has nothing to do with the cause of the crash!

When I was first prototyping the code, I decided to include an abbreviation field to store a menu's abbreviation. But I never used this field, and so I believed that I could free up some critical space by getting rid of it. No dice, however, as the program crashed as soon as I pressed any buttons. No debug output could explain it; and at this point this memory-eating feature still exists.

Another fun bug came with changing the information that our device provided. I noticed that some characters were being cut off, so I decided to fix it in my code. And again, it just decided to crash randomly?

This issue is very confusing, and the only explanation I possibly have is that the method of allocating memory within the code is not allocating enough. After all, I am using an array of char pointer pointers as a 2 dimensional array, so I do not know the specifics of how the Photon manages its memory. Thus, attempting to read the memory at a certain point may exceed the bounds of the array and read other information that is complete gibberish.

But if this were the case, there still remains a puzzling bug. The location menu currently has no functionality built into its center button. In the final product, pressing the center button should call the Geolocation methods we worked so tediously on before and then display the results to the screen. But right now, we are still prototyping this "Operating System", so that functionality is not included. Despite this, pressing the button appears to act as a toggle of some sort. Specifically, it toggles whether some gibberish appears on the menu or not. We added tracing calls to see whether some other subclass's method was being called instead, but it was not. The only code within the method itself is the tracing output.

Color me surprised. These random crashes happen for no apparent reason, either. It's quite frustrating trying to develop in such an environment. Perhaps it would be better to try and develop something similar for a computer instead, just to try and flesh out the concept.

Thursday, November 10, 2016

Video Editing is Hard

In Our Best Interests

To not watch the proof of concept video. But if you insist, you can check it out here. Because I don't want to upload the video again.

Also it uses flash. Did we just get out of a time machine a decade in the past?

Broken Code and Broken Dreams

Push to Master

Don't do it if your code doesn't work. Please. And if someone opens an issue, please work on resolving it.

Sorry

I had to get that rant out of my system. I realize that many of these Particle library devs are volunteers who do what they do out of love. So I'd like to take a moment and thank all the people working on Open Source projects right now. Y'all the real MVPs.

Cutting it Close

Our proof of concept's requirements weren't really that stringent. That being said, I did want to add as much functionality as possible. In fact, I made an entire post about getting the Geolocation to work. What didn't make the cut into the proof of concept is somewhat more interesting.

Hardware that Failed

Our Nokia 5110 LCD came with a part that we had trouble identifying at first. Turns out, it's a Texas Instruments CD4050BE, a hex buffer typically used for logic-level shifting, and it made wiring much trickier. In fact, we weren't entirely sure what its purpose was here. With a little digging, we found out that the LCD worked just fine without this part. And sure enough, it did. So now there's a CD4050BE just sitting around in our design lab. In case anyone needs it.

The other component we were playing around with was pulsesensor.com's (appropriately enough) pulse sensor. In our demo testing of the hardware, we were able to get it working just fine. But that was 2 weeks before our proof of concept demo. We were content with it working, and so decided to move onto the LCD and Geolocation, figuring that getting the pulse sensor working would be easy enough.

Turns out, it's not that simple. In the two weeks that had passed, the author of the code released an update. Appropriately enough, our code broke. It was time to go hunting for a reason. Fortunately, Particle's online IDE's libraries make use of Github. The repository was just a click away. And checking the changelogs and the Readme gave us an answer as to why it was broken. Previous versions had included a dependency library along with the library code. The latest version separated them, so that it was necessary to include both libraries in the code.

So we went back and did that and... nothing. Instead of complaining about being unable to find a necessary library, it was complaining about an error within the library itself. Quite puzzling indeed.

We still had a working demo with example code, though. So we booted it up, and sure enough it worked just fine. It was running version 1.5.0, while the newest version was 1.5.1. So instead of including 1.5.1 in our proof of concept code, we decided to go with 1.5.0. After all, the example code worked fine. Why wouldn't the rest of it work?

Well, for some bizarre and unknown reason, that code broke too. Same exact error as the 1.5.1 library, even though the example code worked fine. Our demo code with the 1.5.0 code still worked, and so did reusing the example code for 1.5.0. But when we tried to migrate the Pulse Sensor code to our proof of concept, it broke every single time.

At this point, we had already invested nearly an hour trying to get the pulse sensor to work. No matter what we tried, it simply refused to. So instead of wasting more time, we decided to cut it. For the proof of concept, we only really needed a location and a screen to display that location. So we decided to work on those for a change.

Hardware that didn't quite Fail

To be quite honest, hardware and software are both hard. And for us in the Hardware IoT section that involves programming our devices, getting both to work together is even more difficult. Case in point - another many hours spent getting the screen to work. Aside from cutting out that weird Texas Instruments part, we did have to do quite a bit of work to get it working. And the other hard part about it - if it's not working 100%, it won't do anything.

Actually, that was a lie. The backlight controls and the screen control are separate. Aside from that, however, we had zero feedback as to where our errors were.

And speaking of errors, there were quite a few of them in getting the screen to work. Our Photon currently has a significant number of its pins in use just for the screen. Misplace any one of them, and the entire thing fails and nobody knows why. In fact, we broke it multiple times over the course of testing. Mostly, it was accidentally pulling out a wire. But sometimes it was trickier. We once spent a good 20 minutes trying to get the screen working, only to find out that somehow a wire had moved from A3 to A4. Moving it back quickly fixed the issue.

One Grand Realization

Experience really does matter in this line of work. There are so many different things that all have to be working in perfect harmony. Even though I have quite a bit of experience coming into this class, I've still been stuck on problems for many hours that turn out to have simple solutions. And it's not because I refuse to learn from my past mistakes. No, it's because there are so many possible mistakes to make that it becomes impossible not to make one eventually. Even the most experienced of engineers will end up hooking up a wire incorrectly. Some may spend hours in their current state trying to diagnose the issue. Others may decide to simply tear down their machine and start over.

Sometimes I feel like I should do the latter.

Friday, November 4, 2016

Occam was Right

Overengineering at its Finest

From my time browsing the internets, I've come to hear one piece of engineering advice over and over and over again.

Don't overengineer stuff.

Well, it's kind of too late for that.

Geolocation is hard

With the work on the rest of the proof of concept going smoothly and ahead of schedule, I decided to work on something that wasn't explicitly in the proof of concept, but would be helpful for a more complete project: Geolocation.

The idea is that Geolocation provides an easy way for first responders to locate the rescuer and victims, especially in less-than-ideal situations such as earthquakes with rubble everywhere. The final product relies on a GPS module, which is built into the Particle Electron, but not the Photon. In disaster situations, GPS should be much more reliable, and it also comes with Particle's built-in library, AssetTracker. But GPS modules cost $40, which we had already spent on other components. That wasn't an option, nor was throwing down $70 for an Electron and Particle's data plan. Instead, I decided to rely on Wifi access points and Google's Geolocation API.

Why-fi? Get it?

Well, because it was basically the only option we had. We're broke college kids who can't afford GPS. So basically, Wifi is the only option we're left with. So that's what we're gonna go with, and hope that it works.

The Quest to Query Google

It's a hard day for two of us freshies who have no experience with web APIs at all. I had to obtain an API key from Google in order to use their services. The free key allows me to make up to 50 requests per second and 2500 total requests per day. I highly doubt I will reach this limit any time soon, though for the completed product it may be necessary to pay the $.50 per 1000 additional requests, up to 100,000 per day. Or maybe even upgrade to the premium plan. But, of course, it is highly doubtful that we will still be using Wifi access points to determine our location when GPS is available and much more accurate.

For the prototype, though, this is about as reliable as it gets. That presents another challenge: how to actually make a query to the service. Oh boy. Our inexperience really showed here. We ran through quite a few services to try and get this working, and ended up completely ditching all of them. Yep.

IFTTT

My first attempted solution was IFTTT - IF This Then That, a simple IoT "recipe" maker that allows certain action events to trigger certain responses, as the name implies. Particle provided a handy channel that allowed us to listen for changes in variables, function return values, events, and even the device status. I decided to start with an event listener. After all, it should be the least tedious to get working, right?

Well, there were a couple problems. The action trigger in IFTTT required that the contents were equal to some arbitrary parameter, so our solution would be impossible to implement via IFTTT's Particle Event listener. Perhaps, then, the variable solution would be better.

But by using the variable listener, the solution becomes even more convoluted. We can say that the variable value is not equal to something, but then, what is stopping IFTTT from using up our entire daily allotment of API calls in the blink of an eye? If we make IFTTT respond by firing an event to set the value of the variable back to 0, we can theoretically fix it. But then there are even more things flying around. Not good.

Not to mention that querying Google's Geolocation API and returning the result was not really easy, either. IFTTT provides the Maker channel to make web API queries, which is exactly what we were after. But with our inexperience, it quickly became obvious that this was not the solution we were after. Sure, we could get the request. But how would we listen for the response? The Maker channel required an event name, but what would that be? We were absolutely clueless (and to be fair, I still am). So we decided that IFTTT was not going to work.

Noodl

In lab on the 28th of October, Simon introduced us to Noodl, a simple-to-use (compared to writing raw code) prototyping tool with Javascript features built in. So I decided to try and figure out how to get it to query Google's Geolocation API. But like anything, it wasn't exactly straightforward.

The example code included a few hundred lines of unformatted Javascript that was required in order to interact with the Particle cloud. And once that was done, it required even more work to try and get an HTTP POST request to Google's API and process the response. Needless to say, it didn't quite work out as planned.

Sure, we ended up with a nice button that said "geolocate," but neither that button nor the Javascript could do everything necessary. Which was disappointing.

Thingspeak

It was at this point that I stumbled upon a post on the Particle Forums that did just about exactly what we needed. So I decided to look up what the basis of this post was. Turns out, it was Particle's own system for creating webhooks. So I booted up the tutorials they provided and followed the steps they gave.

Those steps included setting something up with Thingspeak. So I did that and ended up relaying the information from the Photon to the Particle Cloud and then finally to Thingspeak. At Thingspeak, I was successful in creating a POST request to the Geolocation API. Almost there!

The only issue now was to try and process the response. I had to create a listener for the response and then create an action handler to forward that information back to the Particle cloud. The only problem with that was, well, all of it. I didn't know how to do that whatsoever. And despite me playing around in Lab for about 2 hours to try and fix it, I couldn't get it.

Give up?

Nope. If you find yourself in a pickle, back up a few steps to try and see why you're in the pickle in the first place. So I did just that, and I realized something. I didn't need a 3rd party cloud service at all. Particle's own Webhooks were powerful and simple enough to do just what I needed. Thus, I resolved to create a Webhook that would connect the Photon to Google and get its location.

Webhooks

Finally, the solution is known. No 3rd party cloud services necessary. A simple Webhook is enough to relay all the necessary information to determine a device's location. The only issue is that the Webhook creation process wasn't exactly straightforward. Once again, I was left fiddling with the system for a while before figuring out how it worked at all. And, to be fair, I still don't understand exactly how it works.

Creating a webhook can be done in two ways. With a proper JSON file, you can do it using Particle's Command Line Interface. The web interface also works and is more user-friendly, but at the same time leaves users with so many options that they can easily be overwhelmed. At least I was.

What's more, webhooks currently cannot be edited. If you mess up, you have to delete it and start over again. This is an issue.
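For the curious, a webhook definition for this kind of query looks roughly like the following JSON file, which can be fed to the CLI. Treat this as a sketch from memory rather than a working config: the field names follow Particle's webhook format as I recall it, the event name is arbitrary, and the key is a placeholder. The Photon publishes its Wifi access point scan as the event payload, and the webhook forwards that payload to Google as the POST body.

```json
{
  "event": "geolocate",
  "url": "https://www.googleapis.com/geolocation/v1/geolocate",
  "requestType": "POST",
  "query": { "key": "YOUR_API_KEY" },
  "json": "{{{PARTICLE_EVENT_VALUE}}}",
  "mydevices": true
}
```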

Moment of Truth

Would it even work, though? Well, as they say, there's only one way to find out.

I flashed the Photon with the firmware then went to the Particle App. Sure enough, the function scanWifiAPs showed up as a callable function.

I opened up the Serial monitor to read the data I would be receiving.

I called the function and waited.

The data in the format to send to Google showed up.

And nothing.

What have I done wrong?

Turns out that Google's Geolocation API is quite fickle. A quick check of the Logs showed the reason - Response code 400, which means that either
  • My API key is faulty
  • The response body is incorrect.
Except neither of those was the issue. I just called my method again, and it returned something!

Did I mess up?

I got a location at approximately 40N 80W. I plugged those numbers into Google Maps, and...

I'm in the middle of nowhere. The nearest city is Pittsburgh. Close, but not quite.

Call me insane, but I tried the exact same thing again.

Google Why

Nope, it works this time. I am now consistently placed within 40m of the middle of the Duderstadt Center, which is where I have been testing. I have no clue why any of those errors are happening. I blame Google.

Words Unspoken by any Engineer

"It works!"

Of course, all this work is only for a proof of concept at the moment. For a final product, the Particle Electron and its AssetTracker library provide easy wrappers around its built-in Adafruit GPS module, which can obtain a device's latitude and longitude with ease.

Monday, October 24, 2016

Project Time!

Team?

For this project, I have decided to team up with Maximilian Sharpe, whose blog can be found here. We have decided to work on his project idea: a toolkit to help bystanders give help to those at the scene of a disaster before first responders can arrive.

Timeline?

Here is another copy of the timeline for completeness' sake.
Project Requirements and Timeline

  • Required features for final project:
    • Can take pulse.
    • Can provide information to user on first aid measures.
    • Can send a location to a computer or to a mobile device.
    • Can take in information from the user on the type of emergency.
  • Required features for proof of concept:
    • Can provide information to user on first aid measures.
    • Can send a location to a computer or a mobile device (IFTTT?).
  • Required Parts:
  • Timeline
    • Friday 21st October:
      • Complete list of parts to be ordered
      • Complete timeline and project planning
    • Monday 24th October:
      • Order all parts
      • Begin researching first aid response to different situations
      • Begin experimenting with pulse sensor
      • Begin experimenting with sending information (a preset location) to a computer
    • Friday 28th October:
      • Receive (hopefully) all parts for the project
      • Determine how best to fit everything together
        • Glove? Wristwatch? Portable device?
      • Begin experimenting with display
    • Friday 4th November → Proof of Concept Presentation:
      • Have something printing to the display
      • Have a location being sent from the photon to anything else
    • Friday 11th November:
      • Start to integrate research on first aid into what is being displayed on the screen
      • Begin experimenting with an external power supply
      • Add pulse sensor in some capacity
    • Friday 18th November:
      • Market research
        • What is the problem?
        • Why does this solve the problem?
        • Who is the target market?
        • Projected costs?
      • Begin experimenting with user input into the device
        • What is the emergency?
        • How many are hurt?
        • Is it safe yet or are things still dangerous?
    • Friday 25th November:
      • Clean up loose ends
    • Friday 2nd December → Final Presentation of product

What am I doing?

We're not quite sure yet. There's no set specialization for either of us. Which is good! That should foster effective collaboration and communication.

Friday, October 21, 2016

Project Idea Refinement

Candidate:
Idea 2: Audio control system.
What problem are you looking to solve?
Audio volume issues when using a speaker system
What potential solution(s) have you considered for this problem?
Aside from the IoT solution of intelligent control of volume and other factors, there remains little that can really be done to solve this issue. Perhaps a sound engineer would know more about it. Maybe tearing down the entire room and rebuilding it for better acoustics? That’s definitely not practical, no matter how rich you are, and the return on such a project would be minimal at best.
How might you integrate IoT concepts?
The IoT solution is to have a bunch of Photons or other devices scattered around the room. They each have sensor(s) that can detect different aspects about the audio in the room, and they all report those readings to another device that is controlling the audio device of the room.
1 minute pitch draft:
“Everyone’s heard of audio feedback before. The dreaded ‘eeeeEEEEE’ that keeps getting worse until you turn down the volume or shut down the entire system. What I’m proposing is a way to solve that issue, once and for all. Thanks to the handy Particle Photon, an accompanying audio sensor, and some programming knowledge, we can hook up a bunch of these around a room and use them to automatically control the volume of audio systems so that they’re just the right volume. We’ll start small, in college lecture halls. From there, we can expand to presentation rooms, and finally into the home. The future of self-regulated audio is here.”

RIP this idea

Friday, October 14, 2016

Transmission Control Protocol is a Thing of Beauty

Particle Photons are Quite Versatile

You can do so much with a Photon. IoT stuff, yes. Especially with Particle's built-in cloud services. But there's way more that they can do.

The Basis for the Game: Pong

I've spent over a year by now working on a Game Engine using Java. So far, the one game implementation that exists is the classic game of Pong. The details are a bit complicated, of course, but every single game runs on a Client-Server basis. By creating a Client implementation using the Photon, it was possible to turn the Photon into a wireless controller for Pong.

Hardware

The hardware consists of a Photon as well as a breadboard, two buttons that act as up and down controls, and a bunch of wires and resistors. The breadboard itself doesn't look very pretty.
Of course, it doesn't have to. It's a prototype, and it works. That's what matters. But what good is this hardware without accompanying software to actually make it work?

Software

The software of the Photon is based on the same software as Arduinos, allowing for a lot of compatibility. In addition to the default Arduino libraries, Particle provides many useful libraries that unlock extra functionality the Photon and other Particle boards have the hardware for. Among them is the class TCPClient, which is what it sounds like. When used in conjunction with a TCP server, no matter the source, TCPClient allows for the transmission of data across the internet. Of course, in this case, the client and server are right next to each other. But by the nature of TCP, as long as both devices are connected to the internet (and not firewalled or whatever), it is possible to communicate between them. Compared to Bluetooth or other near-range communications, using the internet effectively extends the range to anywhere in the world.

Of course, anywhere in the world is a bit optimistic, as this is Pong after all. There is no way that the Photon is powerful enough to display the real-time playing field of Pong, at least not with the fidelity required to play it well. Thus, the Photon is effectively tied to the computer displaying Pong, a sad reality for the once-limitless Photon.

Communication How?

On the Java side, the server makes use of Java's SocketChannel, ServerSocketChannel and ByteBuffer classes. The server itself runs based on the tick model. Any data that needs to be sent is aggregated until the end of a tick, when it is sent all at once in a manner similar to Nagle's algorithm. A separate thread exists to read packets received by the TCP and UDP channels so that processing is possible as soon as a tick begins.

The Photon uses the Firmware's built-in class TCPClient, akin to SocketChannel. But before it can connect, it has to know what to connect to. That's why functions to set the host address and port are exposed using the Particle Cloud. Via the Particle app, it's possible to tell the Photon what to connect to. From there, the TCPClient instance attempts to connect to the ServerSocketChannel on the server. If all is done properly, including setting up the host, then the Photon will connect to the Computer and the game will begin.

Using the two different backends is no problem, however, as they are unified by TCP (and UDP, but it's not used in this case). On the Photon, all the packets received are discarded, as they wouldn't be useful anyways. However, it does send one packet to the server in the format of an Input Packet. What goes into that packet? Time to find out...

Anatomy of a Packet

So what goes into the packet that's sent? Well, every packet has its ID written to the buffer first as a 4-byte int. Then, all the data the packet contains is added to the buffer. In the case of the Input Packet, its ID is 6, so the integer 6 is written into the buffer as 00 00 00 06. The data of the Input Packet is a 2-byte short that specifies the player number, and a 1-byte byte that specifies the direction the player wants to move the paddle. The player number for the Photon should always be 1, so it is written to the buffer as 00 01. The direction depends on the buttons pressed. Internally, the directions are 1 for up, 0 for stop, and -1 for down, which are written to the buffer as 01, 00, and FF respectively. Finally, the end delimiter is written in place of the next Packet ID: the minimum value for a signed 4-byte int, or 80 00 00 00.

Overall, the packet is thus 00 00 00 06 00 01 DIRECTION 80 00 00 00.

Prove It!

The following shows the Wireshark analysis of the data. From here, we can see a few things. The Photon's IP is 141.213.30.71, so we're looking for packets sent to and from that address. The highlighted packet shows one sample input packet. Obviously, this is way more than 11 bytes of data. 88 bytes, in fact. This is because the Photon's loop delay is only 5 milliseconds, which allows it to send many different inputs that get aggregated together. The server only processes the first one, as the end delimiter is encountered right after it.

Looking at just the first 11 bytes, we see that the data are indeed correct. Awesome!

Sunday, October 9, 2016

Project Idea Research

Problems that Exist Today in Education and Healthcare, and Possible IoT Solutions:

  1. Logistical/Supply Chain issues: Keeping track of inventory and stuff ordered
    • IoT Solution: Photon with special attachment mechanism – potentially 2 sets of extendable straps. Power is provided by battery, and is only turned on when both are connected. From there, all the metadata about the package can be entered via a web interface, and all the data can be stored in the cloud if desired, or possibly on local servers. The Photon is kept in a low-power mode and woken up by an inventory request. At that point, it should be delivered to its intended destination. If tracking is desired, it is possible to activate Internal Positioning making use of the onboard WiFi, or other technologies if WiFi is impractical.
  2. LECTURE HALL AUDIO EQUIPMENT being too LOUD AND HURTING EARS
    • IoT Solution: Several Photons positioned around the lecture hall, all with some sort of method to pick up audio. Use software to determine whether audio being received at these points is too soft or too loud, and automatically adjust so that the volume is as close to an ideal level as possible.
  3. Lights and other utilities being on with nobody around
    • IoT Solution: An array of sensors and Photons working to detect the presence of people within a room. If no people are detected, then the lights can be dimmed or even turned off.
  4. Impatient Patients
    • IoT Solution: Give each doctor/nurse a Photon (potentially in their pockets) and use existing APs and an internal positioning system to triangulate their positions, letting impatient patients see where help is and making them somewhat calmer.
  5. Smoke Detectors that GO OFF FOR NO REASON
    • IoT Solution: Photons scattered throughout the buildings whose readings of temperature and other metrics also affect the decision for the sprinklers, alarms, etc. to go off. The Photons and smoke detectors should be weighted equally, but fire alarms should override both and immediately call the system into action.
  6. Inaccurate diagnoses that lead to billions of wasted dollars on treatments for ailments that aren’t even the problem.
  7. Bathrooms. Even in the most hygienic of places, such as hospitals, the bathrooms will inevitably be dirty. That’s quite an issue for healthcare professionals and patients alike…
  8. Malfunctioning technology that may not seem like much… but over time adds up in lost time teaching and doing hospital things. As we grow more and more dependent on technology, we will also learn its flaws.
  9. Cybersecurity. Though applicable to almost any field, education and healthcare are probably the most in need of good security against breaches, be it hacking or ransomware, as they often contain very sensitive records that the institution needs to function and should keep private.
  10. Speaking of those records… more efficient record keeping. No doubt there are already decent databases in use today. But there are some cases, such as transferring between schools or hospitals, where records should transfer automatically, among other possible improvements.
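The weighting rule from idea #5 could be sketched like this. The 0.0–1.0 confidence scores, the 0.5 trigger threshold, and the function name are all illustrative assumptions, not a spec:

```cpp
// Sketch of idea #5's decision rule: Photon readings and conventional
// smoke detectors are weighted equally, while a pulled fire alarm
// overrides both and immediately calls the system into action.
bool shouldTriggerSprinklers(double photonScore,    // 0.0-1.0 confidence from Photon sensors
                             double detectorScore,  // 0.0-1.0 confidence from smoke detectors
                             bool fireAlarmPulled) {
    if (fireAlarmPulled) return true;  // alarms override everything
    double combined = 0.5 * photonScore + 0.5 * detectorScore;  // equal weights
    return combined >= 0.5;  // assumed trigger threshold
}
```

With a rule like this, one flaky smoke detector on its own can't set off the sprinklers unless the Photon readings back it up.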

Sunday, October 2, 2016

The System Diagram & Identifying the 5 System Components

Consider the original Edison electric bulb, as well as the Philips Hue bulb. Diagram the 5 system elements in each case. What additional system(s) has to be present for the Philips Hue to work? Can you diagram it?

For the original Edison electric bulb, the system diagram is quite simple for the bulb itself. It requires electricity of some sort, so its source is from a power plant. The distribution system is the electric grid, and the packaged payload is the electricity itself. The tools are the bulbs, and the control system is the light switches that we all know and love.

The Philips Hue, however, requires a few components in addition to the simple electricity delivery that the classic light bulb needs. As an Internet of Things device, it also requires information to function properly. In that case, the source is the user's phone (typically), the distribution system is the internet, and the packaged payloads are the bytes that comprise the information needed to control the Hue. The tools are the Hues themselves, and the control system is once again the user's phone.

Consider self-driving cars. Identify (& diagram) the 5 system elements needed for self-driving cars to work?

Self-driving cars are a curious case: a long sought-after technology that always seems to be so close, yet so far at the same time. In any case, the thing that allows humans to drive cars is one thing: our ability to take in and process information. The same must be possible for self-driving cars. Instead of organs and a brain, however, self-driving cars have sensors and a controller within the car itself. The packaged payload is once again information, this time in the form of bits and bytes instead of electrical impulses. The tools are the car's driving mechanisms, and the control system is the computer within the car.

If you have a choice between making very smart or somewhat less-dumb roadways, which would you choose & why? How would you improve safety?

Of course, one car is not an island. Self-driving cars will be required to work with each other to make sure that they coordinate and do not crash into anything. Roads may be one of the keys to providing that information, though there are other ways to do it. From a certain standpoint, making roads as smart as possible seems like the best thing to do. They could tell cars what condition they are in so as to promote slower driving in harsh conditions, and communicate the presence of non-vehicular hazards on the roads, like deer. They could also provide quite precise location detection of every car on the road. All this technology, however, comes at quite the price, in both upfront and maintenance costs, and most of that information can be supplied much more easily with existing infrastructure. GPS allows for location tracking. Meteorological services can be correlated with road conditions, and unexpected presences in the road are being dealt with even now, with proximity detection and automatic braking. So, instead of making roads wicked smaht, we can make the cars driving on them smarter and tap into existing technologies to get the same information those roads could provide.

As for safety, that is where coordination comes in. Humans are notoriously poor drivers because we are not coordinated at all, and that lack of coordination is the root cause of traffic jams and accidents. If every single car on the road were self-driving and communicating, those issues would cease to exist. Traffic jams would no longer be a problem, and accidents would be cut to zero, so long as every car is working together.

Now, how would they communicate and work together? Obviously, having one, or even several, centralized computers probably won't work. The data from billions of cars, all polled in real time, may be too much to handle, and if a supercomputer capable of such a thing did exist, imagine the cost of running it! Also, imagine a scenario where a deer suddenly leaps in front of a car in Russia. If the car has to transmit that data to a server elsewhere, then wait for the server to make a decision, it may be too late. Time is measured in milliseconds; waiting too long for such decisions could cost lives.

Instead, localize the communication. Have each car be powerful enough to make decisions on its own, and communicate those decisions to other cars within a local radius. This dramatically cuts down on the stress on a handful of supercomputers and saves precious time in the process. Even if the car that brakes suddenly to avoid the deer is being trailed closely by another, the first car can transmit that decision to the one behind it, and so on down the line. The trailing cars can then brake along with the first.
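The localized relay idea above can be sketched in a few lines. The 50-metre broadcast radius, the flat 2D positions, and the struct and function names are all assumptions made for illustration:

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Sketch of localized car-to-car communication: a braking car broadcasts
// only to cars within a fixed radius, and each receiver relays the event
// onward in turn, so the whole chain of trailing cars brakes together.
struct Car {
    double x, y;
    bool braking;
};

const double COMM_RADIUS = 50.0;  // assumed broadcast range in metres

void broadcastBrake(std::vector<Car>& cars, std::size_t source) {
    cars[source].braking = true;
    for (std::size_t i = 0; i < cars.size(); ++i) {
        if (cars[i].braking) continue;  // already reacting; avoids infinite relay
        double dx = cars[i].x - cars[source].x;
        double dy = cars[i].y - cars[source].y;
        if (std::sqrt(dx * dx + dy * dy) <= COMM_RADIUS)
            broadcastBrake(cars, i);  // trailing car brakes and relays onward
    }
}
```

The nice property is that the event hops down the chain: even a car outside the first car's radius still gets the message, relayed through the cars in between, while a car far from everyone is left alone.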

Of course, that's not to say that there shouldn't be a few centralized sources of communication. They are still useful for sending notifications to every car, which may become necessary in cases of national emergency.

Identify the Packaged Payload in (B). How do you make money if you’re Ford? How do you make money if you’re Google? Is the packaged payload the same?

If you're Ford, the Packaged Payload is simply energy in the form of petrol, just like it always has been. You make money off sales of the car itself, along with any accessory sensors that may be deemed necessary to retrofit older cars, perhaps.

If you're Google, the Packaged Payload is slightly different. Instead of the fuel used to propel the car, it is the information that the car needs to decide where to go. Perhaps it is unentrepreneurial to say that such a thing should come without a cost. Perhaps it could, and as with most other healthy, competitive industries, a consumer should have the choice of which control module to buy to control their car. The separate modules would communicate via a universal protocol agreed to by all companies. Differentiating features would include security, personalization, and performance. If one really wanted to make money aside from upfront costs, perhaps charge some sort of subscription fee to use the module. As much money as one might make, however, I do believe that something as revolutionary and critical to safety as a self-driving car control module cannot feasibly charge a monthly fee for use, only a one-time fee. This will inevitably lead to questions about business ethics and practices, but we'll cross that bridge when we get there.

Hopefully soon, as the potential for such technology is limitless.

Tuesday, September 27, 2016

Dominant Design

Dominant Design and the Future

The classic computer mouse.

From humble beginnings as just a set of two wheels, the mouse has gone through several designs to reach what we know today. The majority of it was there by the time the trackball was invented: two buttons, left and right. Nowadays, the main difference is in the tracking method: we use much higher-precision lasers than trackballs could ever hope to achieve, and we've added a scroll wheel. Sometimes there are extra buttons on the side, which come in handy quite often. Sometimes the mouse is wireless; sometimes it is wired. And although there exist other designs that seek to fulfill the same function, such as Apple's Magic Mouse and the ergonomic mice that promise better form, I personally have never seen a single one of either in everyday use.

How you connect your peripherals to a computer.

The Universal Serial Bus interface has certainly earned its name. Before it came a time of much chaos, with printers, mice, keyboards, and monitors each having their own connection standards. External drives had quite the selection of possible connections as well, from eSATA to FireWire. Sure, all of these still exist in some capacity today. One might need a PS/2 mouse or keyboard to access the BIOS before the computer can load USB drivers. And if you're one of the people with a first-generation iPod, you can thank Steve Jobs for using FireWire. But we've seen quite the jump in USB recently. From a maximum transfer speed of 480 Mb/s with 2.0 to 5 Gb/s with 3.0, and now 10 Gb/s with 3.1, the technology is advancing quite rapidly. Printers are now almost exclusively USB enabled, as are mice, keyboards, and external drives. And with the advent of Thunderbolt 3 over the USB Type-C connector, we're even seeing power, video, and even external graphics processing all over this one tiny little cable. It took a while, and it still has a ways to go, but Universal is right.

Phones.

Yes, phones in general. Landlines have gone the way of the dinosaurs, at least for the average household here. As for mobile, we've gone through so many different iterations of designs, from the bulky and somewhat impractical cellphones of the past to the flip phones that everyone loves to make fun of, and even phones with pull-out keyboards. But now, we're left with just one design: a touch screen, a few physical buttons to interact with, and a sleek design that many would never have thought possible in the past.

The mattress you sleep on.

Sure, there might be different alternatives to what you put inside it. Springs, water, synthetic foams, all that stuff. But on the outside, almost every mattress is the same: a really, really heavy rectangular prism.

The shirt you're wearing.

Though it may be partly due to fashion changing, we've seen several different designs, from the loincloth to the toga, lose out to the simplicity of the t-shirt.

Designs Currently in Education and Healthcare

  • Smartboard: From the chalkboard to the whiteboard, and finally the smartboard. In addition, it also replaces those slide projectors that were so finicky to get working.
  • Flipped Classroom: This one's quite the innovation, I believe. Instead of teaching every student the same way and then having them practice outside of class, reverse it. Change it up. Have students study outside of class, which gives them the freedom to do it however they learn best, and have them practice and ask for additional guidance in class. Perhaps this is the best way to implement Common Core.
  • Big Data: As the world inevitably grows more connected, more and more data is going to exist, whether we like it or not. Healthcare has one of the fastest growths of metadata out of any industry, and it's not slowing down. More data does have some privacy implications, but ultimately it is a boon to those in healthcare.
  • The Coming of AI: With all that data comes the responsibility to process it, however. Artificial Intelligence is definitely capable of doing just that. The biggest player is IBM's Watson, which claims to be capable of helping every position within the healthcare industry and beyond, something I am quite confident in.

So what's going to be the next big thing?

Education: Flipped Classroom
Healthcare: AI

10+10... Kind of?

Lighting is hard.
This is the 10+10 sketching method, or at least my take on it.
How do you accurately control a computer's sleep state?

This first page deals with broad ideas.
The second page deals with one concept that I decided to zero in on: an internal positioning system, akin to GPS.
Finally, another focal point: triangulation of position using WiFi/Bluetooth.
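As a rough sketch of that triangulation idea (strictly speaking, trilateration), suppose we already have estimated distances to three access points at known positions, derived from WiFi/Bluetooth signal strength. The struct and function names are made up for illustration:

```cpp
#include <cmath>

// Hypothetical 2D trilateration: given three access points at known
// positions and an estimated distance to each (e.g. from RSSI), solve
// the linearized system for the device's position.
struct Point { double x, y; };

Point trilaterate(Point p1, double d1, Point p2, double d2, Point p3, double d3) {
    // Subtracting the three circle equations pairwise eliminates the
    // quadratic terms, leaving A*x + B*y = C and D*x + E*y = F.
    double A = 2 * (p2.x - p1.x), B = 2 * (p2.y - p1.y);
    double C = d1 * d1 - d2 * d2 - p1.x * p1.x + p2.x * p2.x
             - p1.y * p1.y + p2.y * p2.y;
    double D = 2 * (p3.x - p2.x), E = 2 * (p3.y - p2.y);
    double F = d2 * d2 - d3 * d3 - p2.x * p2.x + p3.x * p3.x
             - p2.y * p2.y + p3.y * p3.y;
    double det = A * E - B * D;  // zero if the three APs are collinear
    return { (C * E - B * F) / det, (A * F - C * D) / det };
}
```

In practice the hard part is turning noisy RSSI into distances in the first place; the geometry itself is the easy bit, as long as the APs aren't all in a line.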

Monday, September 26, 2016

Reading + Reflections on the iPod introduction video from 2001

Entrepreneurial Design, or Philosophy?

I never signed up for Philosophy, yet here I am doing somewhat philosophical stuff. I guess I don't have much of a choice.

The Nature of the Beast:

It's important to keep in mind the truth about the field of innovation and entrepreneurial design. For every success story, there are hundreds, or even thousands, of failures. History only remembers the successful ones. For those who can't quite make it, it's quite the opposite: no matter how many articles are written and press releases held for them, they are destined to fail. Perhaps due to mismanagement of the project; perhaps, through no fault of their own, due to being simply glossed over and forgotten. In a few decades, we might see a few of these still surviving and thriving. But the final destination for the vast majority is far from million- or billion-dollar valuations.

Playing the Cards:

Ideas are a Dime a Dozen; Execution is Key:

In this startup world of ours where anyone can get funding (be it from Kickstarter, its shady counterpart Indiegogo, or the plethora of sites that somehow manage to be even shadier), ideas are worth quite little. There is no such thing as the million-dollar idea anymore. In a world with over 7 billion people, chances are there's someone else who's got the same idea you do. The trick, then, lies in how you see that idea through. It doesn't matter if you've got an idea for solving world hunger, global warming, and P=NP all at once if you can't execute it. Need any evidence? Basically any episode of Shark Tank will do just fine. There's almost always someone who has mismanaged an idea badly enough that Kevin O'Leary tears them apart for it, and rightfully so. Completely viable and marketable ideas have been flushed down the drain due to poor execution. Shame.

Competition, Competition, Competition:

Making execution all the more important is the fact that competition exists. With ideas out in the open thanks to the internet, there's little chance that someone else isn't doing what you are already. The free market, then, is what decides which products float, and which ones sink. Take package delivery via drone, for example. Amazon has promised its new "Prime Air" service as "a future delivery system from Amazon designed to safely get packages to customers in 30 minutes or less," a lofty premise that nonetheless seems to be getting more real every day thanks to advances in technology. But of course, they're not the only ones exploring that market. Mercedes-Benz and Matternet are now also in the mix, working together to achieve something quite similar, and there are bound to be countless other companies scattered throughout. The competition is intense, and everyone wants a piece of the pie. Who gets it comes down to consumer preference, of course, but there are several factors that directly correlate to that:
  • Marketing
  • Form
  • Function
  • Release Date
These factors all play quite an important role in products and the market. It does not matter how good your product looks and acts if nobody knows what it is, or if there are already plenty of similar products on the market. At the same time, releasing an incomplete product early while spending important R&D dollars on marketing instead leads to an underwhelming product, disappointed customers, and a tarnished reputation (*ahem* No Man's Sky, though there are plenty more examples out there). It becomes a delicate balancing act, one which startups may not be able to sustain on their own. But with startups at the heart of innovation due to their "nothing to lose; everything to gain" mentality, it may well be worth the risk. For those that don't quite make it, well, in a few decades, the world will have forgotten their existence entirely.

The Steve Jobs Way:

Few can deny that Steve Jobs was a legendary innovator and businessman. Even being kicked out of his own company didn't faze him, and he made an eventual triumphant return. Just what made Jobs such a brilliant orator, though? What did he do right?
Let's examine his introduction of the first generation iPod. Such a product was completely new, and was given a target market of everybody, because it's "A part of everyone's life." And Jobs does this quite well. He introduces the market, and the fact that there is no market leader. He mentions the digital music revolution, and how he believes that Apple will lead it.
He gives several alternatives to the iPod, and dissects exactly why each alternative cannot stand up to the iPod in terms of price/song, essentially telling the consumer in the most objective way possible that the iPod is the best way to get portable music.
As if that wasn't enough, he seems to never run out of ways in which the iPod, and the Apple ecosystem it's built around, are better than anything else. Automatic integration with iTunes. Compatibility with the iMac. Long battery life. Fast file transfers. And all of that contained in a device the size of a deck of cards.
For the time, this device truly was revolutionary. And Steve Jobs did quite a good job telling the world exactly why it was.

Sorry I had to.