Friday, October 30, 2009

ITP Had A Haunted House

ITP had a Haunted House last night, and I did my best to add to the awesomeness.  Specifically, I borrowed an idea from David Bowie's 1997 stage installation, and decided to project some faces onto amorphous "head" type shapes.  My fellow ITP'ers were kind enough to indulge me with some awesome footage, and the result was some disembodied, creepy looking weirdness!  Thanks to Meredith for the pics...




Wednesday, October 28, 2009

Physical Computing: Reaction To Visual Intelligence


Donald Hoffman's "Visual Intelligence" manages to take a relatively conventional concept (that our brain "tricks" us into perceiving much of what we deem "real") and illustrate it through the novel lens of amputees.  By using amputees' sensation of phantom limbs, Hoffman creates a tangible and realistic illustration of the disconnect between the physical and the mental world.

While Hoffman's examples do a great job of illustrating the concepts, the concepts themselves aren't exactly novel.  As many simple schoolyard tricks demonstrate, the brain can easily be fooled into misperceiving "reality".  Watching movies, smelling one thing while eating another, combining hot and cold sensations - all of these can trick our nervous system into perceiving things that aren't "really" there.

In the end, this fact is obvious, but perhaps overlooked because it is so common:  the best thing we can do moving forward is to try to consider how we really perceive the world around us, and enlist this as we make choices in design.

Visualizing Data: Reaction To Karsten Schmidt


Karsten Schmidt describes himself as a "Computational Designer", and the description is apt:  most of his design projects are driven by code and generative algorithms.  In stark contrast to Aaron Koblin, who uses small amounts of data from a wide range of individuals, Schmidt uses programmatic iterations that generate similarly unique data sets.

The generative nature of Schmidt's work definitely separates it from Koblin's more "techy" work:  While Koblin's pieces simply use data sets with modern visualizations, Schmidt's generate their own data.  In the end, this results in the pieces (which could potentially be seen as more robotic or inhuman) actually having more in common with Koblin's human created data sets.

This conclusion about Schmidt's work brings to mind a number of interesting questions in the area of data, technology, and intelligence.  Specifically: what is interesting about data, and where are its most interesting sources?  Moreover, does it make a difference whether data is real or generated, and what is it about the presentation that gives it a more human feel?

To my mind, Schmidt's work unquestionably draws attention to the fact that artificial, generated data can be every bit as human as real data sets, perhaps more so.  Considering it further, this may be because generative data "grows" in much the same way a group of humans "grows" a widely dispersed data set.  In the end, perhaps it is the life of the data, rather than its end points, that truly defines how it is perceived.

Visualizing Data: Reaction To Aaron Koblin


In looking at Aaron Koblin's work, the pieces seem to divide into two categories:  those that use Amazon's Mechanical Turk to generate data, and those that simply create visualizations of large data sets.  While the visualizations certainly have their appeal, I have to say that I prefer the Mechanical Turk projects.

The Mechanical Turk-generated data sets not only provide novel visualizations, but also novel ways of generating the data behind them.  Seeing how data sets created at a micro level are still inexact is an interesting analogy for how larger projects can have an inexact nature.  What's more, Koblin's visuals are compelling and unexpected.

By contrast, the visualizations that depend on external data suffer from a forced feel - they try too hard to be futuristic or "different", and end up being cliché.  The idea of mapping flight patterns or telephone lines has been done a million times, while the "House Of Cards" video is utterly reminiscent of the pin-art needle pads that conform to represent shapes.

Out of all of the pieces, my favorite is probably the sheep market:  the representations are novel and humorous, the data collection interesting, and the representation enjoyable to navigate.  In short, it presents some serious concepts about data generation in the modern world while at the same time giving them a humanistic feel.

If anything, that is the shortcoming in Koblin's less enticing work:  the absence of humanity, and a feeling of overly conscious attempts to be futuristic and technologically advanced.  Part of this reaction is probably driven by overexposure to faux-futurist imagery, but it's also a result of the fact that Koblin's more human works are simply more novel and easier to relate to.

Tuesday, October 27, 2009

Physical Computing Week Seven Lab: Multiple Serial Output

Building on last week's serial lab, this week we took the same principles and applied them to multiple serial data sources rather than a single one.  In this case we took a circuit containing two analog sensors and one digital sensor, and sent the output to a Processing script.


Here's a picture of the circuit - as you can see, there are two analog inputs (the potentiometers) and one digital input (the push button).  As was noted in the lab, this set of inputs represents the same inputs as a typical one button mouse.



As such, the inputs were used to control a circle on screen, with the push button turning the circle on and off.  You can see this control at work in the video above.


After getting the script working with a streaming serial input, we then rewrote the Arduino side to wait for a handshake before it started sending data.  Once it received the handshake, it would send only one set of data, until it received a request for another set.  This serial behavior can be seen in the video above.


As mentioned at the top of the lab, the principles put to work here are very similar to the ones from last week's lab, but simply expanded to allow for multiple inputs.  This, in turn, allows us to use the Arduino's serial output in a far more versatile and productive manner.

Tuesday, October 20, 2009

Physical Computing: Real World Technical Observations

This week we were asked to go into the "real world" and observe people interacting with devices, and see how it met with our expectations.  I did just that, but unfortunately had very little to report in terms of results:  Everywhere that I went, people seemed to use devices or interfaces exactly as expected.

I spent some time near an ATM, by the entrances to some buildings, and near some subway MetroCard machines.  In all cases, it seemed that the users knew how to use the devices on hand, and simply went through the motions, often almost intuitively.  There are two explanations that I can attribute this to in relation to last week's readings.

First, it may simply be that people are smarter than the "bad design" critics give them credit for.  Put differently: just because something is poorly designed doesn't mean it's unusable.  It may be a hassle, but people are adept enough to figure it out.  Take the case of a PC:  I used Windows perfectly well for years.  Now that I use OS X, I'm far happier, but my ability to function hasn't particularly changed.

The second explanation is that it's simply a case of learned behavior:  in a city like New York, people tend to have routines and typical day to day actions.  It may simply be that the people I observed have overcome poor design because they've become so accustomed to it - now they simply function as normal, and envelop the poor design in their routine.

I think the reality is that it's probably some of both:  very few poor designs are unfathomable, but they can be somewhat challenging at first.  However, after the 100th time using a door or an ATM, very few functional humans are going to keep making the same mistake.  That being said, if there were more and better design, it might simply make people's lives easier.  This might in turn lead to happier people, and then - who knows!

Physical Computing Week Six Lab: Serial Output

Through no fault of ITP's, this week's lab was perhaps the most redundant task I've undertaken since starting the program.  This is largely because when I started working at Dolby Laboratories, my first sizable task was to write almost the entire software stack for serial communication on the Dolby Digital Cinema system.  As such, doing so in a basic manner on the Arduino/Processing platform ended up being pretty trivial.  That being said, it was fun to see it working, and to discover that Processing and Dolby use the same serial back-end library - RXTX!


Because of my strong familiarity with the material, I designed a relatively simple circuit employing a potentiometer to send analog data over the serial port.


Reading this data was also relatively straightforward, allowing it to be piped into a graph in Processing, which can be seen above.


Yay!  Serial communication!

Physical Computing: Stupid Pet Trick

With the knowledge acquired thus far, we were enlisted to create a "Stupid Pet Trick" for Physical Computing.  In short, this meant creating a novel, simple device that enlisted our knowledge of analog and digital inputs and outputs in a hopefully entertaining way.  In my case, I decided to take the assignment quite literally, and design an interactive cat toy.


The toy consists of two pieces:  a tennis ball on a spring, and a laser pointer mounted on a servo.  Once the program is initialized, the servo is driven by data coming from a flex sensor embedded in the tennis ball.  In this way, the play of one cat (with the tennis ball), will drive the entertainment of another (with the laser pointer).  In short, it's a low maintenance way to have the animals keep each other busy.


I embedded the tennis ball and spring in a wooden platform for stability, and fed a flex sensor up into the spring.  That way, when the spring bent, so did the flex sensor.  I routed a wire conduit out of the wood so that the wires would be hidden, and then sealed them in with a glue gun.

The laser mounted on the servo was a bit more of a "hack job", employing twist ties and a free Flaming Lips laser pointer.  However, in the end it worked out quite well, with the on/off button controlled by another twist tie.

The circuit itself was actually quite simple, needing only one input and one output for the flex sensor and servo, respectively.  What's more, it worked quite nicely with the spring easily driving the servo.  

Perhaps the only drawback was that, as this was just a sketch of a device, I kept the two pieces quite close together - far too close to allow for "real world" testing without each piece distracting the animals from "their" side of the toy.

Tuesday, October 13, 2009

Visualizing Data: Jan Tschichold, Graphis, And Josef Müller-Brockmann


This week in Visualizing Data, we were asked to look into three names in design:  Jan Tschichold, Graphis, and Josef Müller-Brockmann.  Strictly speaking, Graphis is a Swiss design journal rather than a designer, but all three are rooted in mid-20th-century Swiss design.  Moreover, they seem to share a unified aesthetic sense based firmly on simple, rigid, and linear forms.


While the three certainly create some interesting images, I'm not totally sure what it is that might distinguish Swiss Modernism from Modernism in general.  Moreover, I found these three in particular somewhat difficult to research, and their presence on the web is not quite as prevalent as that of their more famous colleagues.


That being said, it's clear that there is a unity amongst their work:  all three seem to lean towards geometric designs that employ simple fonts, geometric shapes, and stark colors.  What's more, all three are hailed as innovators in this area.  As such, it may be that their innovations and style seem more mundane in today's climate where many of their stylistic choices have become part of the mainstream.

Reactions To "Attractive Things Work Better" and "The Design Of Everyday Things"


This week in Physical Computing we were asked to read two pieces,  both by Don Norman.  The two pieces provided contrast to each other, in that the first, "The Design Of Everyday Things" is a chapter from Norman's original book focusing on usability, while the second is an essay attempting to amend some of his original conclusions and take aesthetics and emotion under consideration.


While both readings are interesting, by and large their conclusions sit squarely in the realm of common sense.  Perhaps this is because the writings are close to two decades old, and it is certainly true that capable and aesthetically pleasing designs have become much more mainstream in the past ten years.


In the first piece, Norman makes a strong case for utilitarian designs, and the need to consider usability in deployment.  In the second piece, he responds to his own writing, by conceding that aesthetics can have an equal importance to usability when designing the optimal device.


That being said, many of Norman's examples seem trite, or perhaps from another age.  The tasks or devices that he cites as challenges are simply things that most adults today know how to deal with.  The "blinking clock on the VCR" is a joke rooted in the '80s, and with good reason; it's simply no longer an issue.


Norman's writings may have had poignancy and relevance ten years ago, but today they serve to do something different.  They are illustrative of the advances that have been made in design in the mainstream, and just how prevalent they are.  Here's hoping the trend continues.

Wednesday, October 7, 2009

"Mystery Data CSV" Parsing


This week for Visualizing Data we were given a "mystery" data set, along with some hints that the set (wink, wink) might contain x-y coordinates. This was an exercise not only in parsing CSVs, but also in taking in data and deriving meaning from it.


A quick parse of the file revealed the x-y coordinates, and quickly demonstrated them to represent a map of the world. The third (data) value was indeterminate, but appeared to represent some variable (population? energy consumption?) associated with more populous areas. When used as a pixel's alpha value, the picture quickly came to resemble the well-known maps of the earth from space at night.


While this was all well and good, it didn't seem to reveal anything about the data, other than that it was exactly what it appeared to be, and that there was worldwide trending. However, in an effort to determine slightly more about it, I decided to project the data values onto the y-axis, and the y-axis into z space. This meant that the map was rendered horizontally, with the height of the map representing the data at a given point.


Once this was done, it revealed a few more interesting facts about the data:


1) Despite the "hot spots", there's no particular area of the world that lacks high data points. The points are universally high and low across the breadth of the map.


2) The data appears to be highly stratified across the map, resulting in data "rows" on the y-axis. While I can't be sure why this might be, it seems likely that these "rows" are the result of estimates or rounding employed in the data collection.


Overall, this exercise allowed me to parse CSVs, which is relatively trivial. However, it also forced me to look at the data a little more closely, and in doing so revealed some facts that might otherwise have been overlooked in the 2D model.


Download code by clicking here

Monday, October 5, 2009

Some Thoughts On The Gotham Typeface


 This week in Visualizing Data, we were asked to consider the Gotham typeface and answer a few questions.  Here are my thoughts.

What is the “Gotham” typeface and what is its design inspired by? 
The Gotham typeface was commissioned by GQ magazine in an attempt to find something new, geometric, and masculine.  It was inspired by the lettering on the buildings of "old New York", specifically the Port Authority Bus Terminal.


What type foundry drew and released Gotham? 
Gotham was drawn and released by the foundry Hoefler & Frere-Jones.

How much does this type foundry charge for the “Gotham Bundle” for a single computer? 
The "Gotham Bundle" sells for $69.00 on the H&F-J site.

How does that make you feel about fonts you’ve pilfered (if you have done so)? 
Frankly, I think that typefaces should be free when used outside of a business context.  The concept of "owning" a typeface even seems silly in a general sense, but is a necessity for foundries to exist.  That being said, as a student and/or creative artist the likelihood that I would ever personally pay for a font is precisely zero.

And briefly, who is Matthew Carter and what did he contribute to digital typography?
Carter is a typographer who began working in the 1960s as an apprentice.  In 1981 he went on to co-found Bitstream, one of the first digital type foundries.  His contribution can be most easily summed up as pioneering fonts tuned specifically for a high degree of readability on a computer screen.

Theme And Variation

In the Theme And Variation assignment, we were required to use only two input values:  a string of text, and a single integer.  We then would use these two input variables, along with only black and white, to render a visualization of the input data.


My idea was to create a grid of 27 circles representing the alphabet, with the last circle reserved for special characters.  I would then black out the circles to relay the string to the viewer.  This worked well, but created a somewhat standard uniformity across the grid.  To remedy this, I offset the letters being displayed going from top to bottom.  This allows for patterns that create more of an animation.  Changing the input string changes the look of the animation, while changing the input int changes the frame rate of the rendering.


The version above is actually the second version of the assignment, with some refinements and changes.  Specifically:  I fixed the opening "grid setup" to use an independent frame rate, so that you don't have to wait around when you use a small int.  I also added the "growth" of the rendered circles, so that they no longer appear to flash on and off, which results in a smoother animation.  Finally, I revised the code to be more character-agnostic, since this aided me in animating the circle "growth".


Download Code: Theme and Variation

A Reaction To "Design Meets Disability"

This week's reading for Physical Computing explored the gap between the worlds of design and engineering, focusing specifically on the realm of devices made to assist the disabled.  The article contained a wide range of insights about this largely engineering based industry and its seeming disconnect from the world of design and aesthetics.  However, the portion that resonated with me most strikingly was the greater problem of separation of industries.  

In today's world, it seems that there are often very striking lines drawn between industries, with the undesirable result of poor implementation and reduced usability.  This is clearly exemplified in the article in relation to devices such as wheelchairs, prostheses, and hearing aids.  However, this is just one example of a gap that needs to be bridged.  Throughout day-to-day life, we see and use devices and interfaces that are hindered by the fact that they were engineered, and never designed.

Thankfully, this is a problem that is slowly being addressed by many companies, and hopefully is the reason that many of us are at ITP:  To gain insight, learn from others' experiences, and create devices that encompass a wider array of perspectives and insights.

Physical Computing: Week Four Lab

In week four, Physical Computing turned to using analog output to create productive, tangible results.  This included two labs, creating output in two forms.  The first was a servo lab, which took an analog input and mapped it to the physical movement of a servo.  The second was a tone lab, which took analog input and mapped it to the output of a small speaker.  Both demonstrated that an analog sensor can easily be used to drive physical output.


In the first lab, we wired a circuit to include an analog sensor and a servo.  In this photo we can see the circuit implemented, with a flex sensor in place to control the servo.


Once the circuit was completed, a simple upload of the lab's Arduino program yielded the behavior seen in the video above. Either manual pulsing or the Arduino servo library could be used, and the same behavior resulted. In short, the flex sensor's analog output controls the position of the servo.


The second lab followed a similar concept, but instead used the sensor to control the tone of a speaker. I chose a potentiometer for my control, so that dialing it would control pitch. This circuit can be seen above.


Once wired, this circuit also required a small Arduino program to map the analog input to an analog output value for the speaker. In the video above, the potentiometer is being moved, and the resulting speaker tone changes in turn.

Thursday, October 1, 2009

A Reaction To "The Bandwidth Of Consciousness"

In taking on this week's reading, "The Bandwidth Of Consciousness", I have to say that I'm a bit flummoxed on a number of counts.  Most notably, I'm puzzled that the author believes the correlation between bit rates and human thought is even a reasonable one.  If I look at an extremely high resolution image (say, 100 MB), am I suddenly consuming data at that "bit rate"?  More importantly, say the color depth of that image increases so that it's now 200 MB in size.  Am I now consuming even more data?  The prospect is silly, because humans don't consume data as "bits", and an attempt to illustrate otherwise is a lost cause.

For another example, think of audio:  recorded audio has a bit rate that could potentially indicate "bandwidth", but live audio has an infinite "bit rate".  If I listen to a live violin, is my bandwidth consumption infinite?

Even more problematic is the concept as a whole:  the idea that you can quantify the input to the human nervous system in any way is simply a principle that I don't see as viable.  While it's certainly interesting to see scientists attempt an explanation in this manner, at the end of the day I can't see that it yields much value or insight into the reality of the situation.