Tuesday, March 30, 2010

Pixel By Pixel: Multiple Perspectives In 2D

This week in Pixel By Pixel, we were asked to draw on the painting styles discussed in class as inspiration for an interactive piece.  I decided to build on last week's work in image banding and attempt to emulate Picasso's technique of forcing multiple perspectives into a single 2D plane.

Picasso's "Portrait Of Dora Maar"
 
In order to accomplish this, I modified my program to accept two camera inputs, one for each perspective.  I then added a keyboard interaction that lets the user adjust the banding resolution.  The result is a program that looks at two perspectives and divides them among alternating image bands.
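As a rough illustration of the approach, here's a minimal sketch of the band-interleaving step, written against a plain pixel buffer rather than the actual camera code (the Frame type and names like bandWidth are placeholders of mine, not the original program's):

```cpp
#include <cstdint>
#include <vector>

// A grayscale frame stored row-major: width * height pixel values.
struct Frame {
    int width;
    int height;
    std::vector<std::uint8_t> pixels;
};

// Interleave vertical bands from two equally sized camera frames.
// Columns whose band index is even come from camA, odd ones from camB.
// bandWidth is what the keyboard control would adjust: narrow bands
// (high banding resolution) blend the two views, while wide bands keep
// each source obvious and displaced.
Frame interleaveBands(const Frame& camA, const Frame& camB, int bandWidth) {
    Frame out{camA.width, camA.height,
              std::vector<std::uint8_t>(camA.pixels.size())};
    for (int y = 0; y < out.height; ++y) {
        for (int x = 0; x < out.width; ++x) {
            const Frame& src = ((x / bandWidth) % 2 == 0) ? camA : camB;
            out.pixels[y * out.width + x] = src.pixels[y * src.width + x];
        }
    }
    return out;
}
```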

In order to achieve an effect similar to Picasso's, the user must carefully align the camera angles so that the subject is centered in both frames.  This ensures the unity necessary for merging the two 2D planes into one.  In the image above, I've centered myself in the frame of both camera perspectives.

As the user adjusts the banding resolution, a variety of effects occur.  Here, we can see that at low banding resolutions, the two source images are clearly differentiated, resulting in recognizable pieces of each image and a feeling of displacement for those pieces.

As the banding resolution is increased, the feeling of displacement is reduced.  Instead, the image yields more of a feeling of simultaneous existence, with both images occupying the same space.  This is because the increased resolution distributes each perspective more evenly across the frame, despite using exactly the same number of pixels.  At these higher resolutions, there is less need for the object or scene to be centered, as the increased clarity allows both perspectives to be seen regardless of positioning.  (This effect can be seen at the top of this entry.)

Because the horizontal and vertical banding controls are independent, they can also be used to combine the two effects.  This can produce a striped pattern that offers some of the best of both worlds: the higher resolution in one dimension increases clarity, while the lower resolution in the other preserves the feeling of displacement and the clear existence of two perspectives.

This two-camera application of image banding and perspective is clearly in its early stages.  Most notably, a fixed and aligned camera configuration might yield more consistent images and interaction.  Additionally, a much larger number of cameras could be used, resulting in further displacement and perspective collisions.  For example, four cameras aligned along the x and y axes could produce a 2D image showing an object or scene from all sides.  Alternatively, motorized camera mounts could allow the user to control perspective, and thus control the displacement in the 2D image they were creating.

Nature Of Code: Final Project Proposal, Tetris and Conway's Game Of Life


Our Nature of Code final project is about as open-ended as it gets, with the option to leverage just about any of the phenomena we've covered.  What's more, the visualization (or lack thereof) is also completely open-ended.  In short, we were asked to look back at the huge amount of material covered this semester, and get inspired.

For me, that meant childhood video games; specifically, Tetris.  While reading this article on Conway's Game Of Life, I was struck by how similar the cellular shapes were to Tetris pieces.  That got me wondering whether there might be a way to combine the two into a video game with novel gameplay driven by cellular automata.
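For reference, the automaton half of the idea is just the standard Game Of Life update.  A minimal sketch of one generation, written only to make the rules concrete (the actual game design around it is still open):

```cpp
#include <vector>

using Grid = std::vector<std::vector<int>>; // 1 = live cell, 0 = dead cell

// One generation of Conway's Game Of Life on a non-wrapping grid:
// a live cell survives with 2 or 3 live neighbors; a dead cell
// becomes live with exactly 3.
Grid step(const Grid& g) {
    const int rows = static_cast<int>(g.size());
    const int cols = static_cast<int>(g[0].size());
    Grid next(rows, std::vector<int>(cols, 0));
    for (int r = 0; r < rows; ++r) {
        for (int c = 0; c < cols; ++c) {
            int neighbors = 0;
            for (int dr = -1; dr <= 1; ++dr) {
                for (int dc = -1; dc <= 1; ++dc) {
                    if (dr == 0 && dc == 0) continue;
                    int nr = r + dr, nc = c + dc;
                    if (nr >= 0 && nr < rows && nc >= 0 && nc < cols)
                        neighbors += g[nr][nc];
                }
            }
            next[r][c] = g[r][c] ? (neighbors == 2 || neighbors == 3)
                                 : (neighbors == 3);
        }
    }
    return next;
}
```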

While I'm still fleshing out the idea, you can check out my thoughts so far in the proposal below.

ppt: Nature of Code Final Project Proposal - Tetris and Conway's Game Of Life

Tuesday, March 23, 2010

Pixel By Pixel: Pixel Transformation


This week in Pixel By Pixel, we were asked to delve into the world of pixel transformation: put simply, taking the individual pixels in an image and processing them so that they change location in the grid.  The goal was a result that was "dynamic and interactive".

Part 1: Reflected Pixels
I began the exercise by dividing the pixel grid in half both horizontally and vertically, and reflecting one quadrant's pixels into the remaining three quadrants.  The result was a program that is both entertaining and visually dynamic.  As can be seen from the images below, it allows for imagery (particularly with facial anatomy) that instinctively feels deformed or distorted.  However, it can also feel fairly simplistic, along the lines of a house of mirrors.
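The reflection itself is a simple pixel copy.  A rough sketch of the idea, operating on a plain grayscale buffer (the Frame type and function name are placeholders of mine, not the original code):

```cpp
#include <cstdint>
#include <vector>

// Row-major grayscale buffer, same shape as in the two-camera sketch above.
struct Frame { int width; int height; std::vector<std::uint8_t> pixels; };

// Mirror the top-left quadrant into the other three quadrants,
// producing the four-way "house of mirrors" reflection described above.
void reflectQuadrants(Frame& img) {
    for (int y = 0; y < img.height / 2; ++y) {
        for (int x = 0; x < img.width / 2; ++x) {
            std::uint8_t v = img.pixels[y * img.width + x];
            int mx = img.width - 1 - x;   // mirrored column
            int my = img.height - 1 - y;  // mirrored row
            img.pixels[y * img.width + mx]  = v; // top-right
            img.pixels[my * img.width + x]  = v; // bottom-left
            img.pixels[my * img.width + mx] = v; // bottom-right
        }
    }
}
```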



Part 2: Banding
While experimenting with the reflection algorithm, I modified the factor by which the horizontal reflection was divided.  This resulted in the banding pattern seen above.  Observing the pattern made me think that intentional banding could create compelling imagery built around repetition, patterns, and blending.


Part 3: Banding Grids
I decided to pursue the banding algorithm, and to do so in both the vertical and horizontal dimensions.  I achieved the desired result by grabbing a section of pixels and reapplying it using modulo math.  Since the modulo wraps back to zero every time its divisor is reached, the algorithm starts anew and redraws the desired band.  The result is a "banding" grid of the selected area, as can be seen above.
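A rough sketch of the modulo approach, again on a plain grayscale buffer with placeholder names of my own:

```cpp
#include <cstdint>
#include <vector>

// Row-major grayscale buffer, as in the earlier sketches.
struct Frame { int width; int height; std::vector<std::uint8_t> pixels; };

// Tile the whole frame with one rectangular band of pixels using modulo math.
// (bandX, bandY) is the top-left corner of the grabbed band and bandW x bandH
// its size (the band must lie inside the frame).  Each output pixel wraps its
// coordinates back into the band, so every time x % bandW or y % bandH resets
// to zero the band is redrawn, producing the grid of repeated bands.
Frame bandingGrid(const Frame& src, int bandX, int bandY, int bandW, int bandH) {
    Frame out{src.width, src.height,
              std::vector<std::uint8_t>(src.pixels.size())};
    for (int y = 0; y < out.height; ++y) {
        for (int x = 0; x < out.width; ++x) {
            int sx = bandX + (x % bandW);
            int sy = bandY + (y % bandH);
            out.pixels[y * out.width + x] = src.pixels[sy * src.width + sx];
        }
    }
    return out;
}
```

Smaller band sizes correspond to the "higher resolution" grids discussed below, since the same band repeats many more times across the frame.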



Part 4: Abstract Banding Grids And Resolution
While a low resolution banding grid can be more compelling in terms of subject recognition, a higher resolution grid can actually yield more unusual and abstract patterns, particularly in terms of subject blending and the perceived appearance of the subject.


Part 5: Subject Recognition In High Resolution Banding Grids
Perhaps the most counterintuitive thing about the high resolution grids is that despite their complexity and abstraction (see above), they are simply a grid of a repeated image.  This means that if great care is taken, the subject can still be recognized, as can be seen from the recognizable lettering in the image below.


The idea of banding grids takes a variable chunk of an existing screen and repurposes it as a tool for building patterns that are simultaneously abstract and recognizable.  What's more, it takes a group of pixels and repurposes them at a macro level, such that the group itself functions as a single pixel.  Further developments in this area might include using the mouse to move the selected band, implementing a constantly moving band that iterates across the image, or intelligently selecting bands at the current resolution to recreate the source image itself out of recursive bands drawn from that image.

Thursday, March 4, 2010

Sound And The City: Orchestra Seating Mk. 2

This week in Sound And The City, we re-presented our final project proposals, this time with the addition of an outside critique.  As such, I've modified my initial proposal to include a specific site and a more specific implementation plan.  I've also implemented a rudimentary demo in Logic Pro to simulate the effect of the installation.

ppt: orchestraSeating mk. 2

Wednesday, March 3, 2010

Pixel By Pixel: Experiments In Thermal Imaging


This week in Pixel By Pixel, we were asked to embark on a project inspired by light phenomena observed on our recent trip to the New York Hall of Science.  As such, I decided to take the museum's thermal camera a step further and actually create a thermal pixel grid, seen above.  The grid (in theory) would use Peltier junctions behind an insulated grid of copper tiles, which could then be heated individually using a microcontroller.

The paper "canvas" frame

Unfortunately, the project had challenges from the outset.  The first was in the initial concept itself: once I started testing the copper tiles with the Peltier junctions, I discovered that something about the copper tiles, probably their reflectivity, made them invisible to the thermal camera.  I tried aluminum as an alternative, with the same result.  However, when I used paper, all seemed well, so that became my new solution.

 The four transistors of the circuit

I created a frame for the paper canvas, and then endeavored to complete the full circuit required to power the four Peltier junctions.  This was achieved with a circuit consisting of four transistors and an external power source.  The power source was required to adequately heat the junctions, and the transistors controlled the power source's path to each junction.
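For context, the control side amounts to toggling four digital output pins, one per transistor.  A hypothetical Arduino-style test-pattern sketch along these lines (pin numbers and timing are illustrative, not from the actual build):

```cpp
// Each transistor's base/gate is driven by one digital pin, switching the
// external supply to one Peltier junction.  Pin assignments are placeholders.
const int junctionPins[4] = {3, 5, 6, 9};

void setup() {
  for (int i = 0; i < 4; i++) {
    pinMode(junctionPins[i], OUTPUT);
    digitalWrite(junctionPins[i], LOW); // all "pixels" start off (unheated)
  }
}

void loop() {
  // Simple test pattern: heat each junction in turn for two seconds.
  for (int i = 0; i < 4; i++) {
    digitalWrite(junctionPins[i], HIGH); // transistor on, junction heating
    delay(2000);
    digitalWrite(junctionPins[i], LOW);  // transistor off, junction cooling
  }
}
```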

The fully wired canvas/box

Once the circuit was complete, I mounted it (and the four Peltier junctions) to the back of the canvas/box, enabling the entire unit to stand largely on its own, with the only outgoing connections being to a power source and the microcontroller.


Once this was complete, I began to test the unit using test patterns, and then the true problem arose: the Peltier junctions function smoothly for 20-30 seconds, but as they retain heat they lose their ability to turn "off" as pixels, and they simply become a grid of pixels stuck in the "on" position.  What's more, the thermal camera itself creates recurring (and irritating) scan lines.  These effects can be seen (along with ITP in-lab antics) in the video grab from the thermal camera above.

While the end result was ultimately a disappointment, the endeavor was not.  The idea still seems feasible, even if Peltier junctions may not be the appropriate solution.  What's more, before the pixels fail, one gets a general sense of the intended effect, and it's actually quite visually pleasing.  Additionally, I feel that the addition of a color camera could take the project even further.  Put simply: there are still many aspects to explore.