Monday, April 26, 2010

Nature Of Code Final: Game Of Lifetris

As I originally outlined in my proposal, for my Nature Of Code final I decided to create an implementation of Conway's game of life, and combine it with the classic computer game Tetris.  The result, while not terribly fun to play, actually goes a long way in demonstrating that the game of life is more complex than it appears and, despite its simple rules, offers a large degree of apparent randomness.

I started by trying to discern the basics of the game, and came to the conclusion that I would need to start with some pieces/organisms already on the board.  The game of life tends towards decimating populations, so having no starting population lends itself to a quickly emptied (and boring) board.  Moreover, given the game's natural tendencies, I decided that I would make the aim of the game to allow pieces to live, rather than to destroy them (as was true in the original tetris).

I took this idea and coded a simple version that was in black and white and played out a generation of the game of life after each Tetris piece was dropped.  Once this was complete, I noticed a major problem: despite my hopes to the contrary, there's little way to discern what will happen in the next round of the game.
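
For reference, the life step I run after each drop is the textbook B3/S23 rule.  Here's a minimal sketch in plain Java (the bounded-board edge handling, where off-board cells count as dead, is an assumption about the grid, as are the dimensions):

```java
/** One generation of Conway's Game of Life (rule B3/S23) on a bounded board. */
final class Life {
    static boolean[][] step(boolean[][] board) {
        int rows = board.length, cols = board[0].length;
        boolean[][] next = new boolean[rows][cols];
        for (int r = 0; r < rows; r++) {
            for (int c = 0; c < cols; c++) {
                // Count the live cells among the (up to) eight neighbors.
                int neighbors = 0;
                for (int dr = -1; dr <= 1; dr++) {
                    for (int dc = -1; dc <= 1; dc++) {
                        if (dr == 0 && dc == 0) continue;
                        int nr = r + dr, nc = c + dc;
                        if (nr >= 0 && nr < rows && nc >= 0 && nc < cols
                                && board[nr][nc]) neighbors++;
                    }
                }
                // A live cell survives with 2 or 3 neighbors;
                // a dead cell is born with exactly 3.
                next[r][c] = board[r][c] ? (neighbors == 2 || neighbors == 3)
                                         : (neighbors == 3);
            }
        }
        return next;
    }
}
```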

To work my way around this, I next tried implementing a "future" system, where the game would calculate the results of your current piece placement ahead of time and show them to you as a translucent projection on top of the current board.  The problem is that, while interesting, it's still extremely difficult to tell whether you're creating or destroying pieces.

Continuing my quest for at least some level of usability, I settled on one final implementation.  I kept the "future" board concept, but colored the tiles more strategically: tiles that stayed alive were black, tiles that died were red, and tiles that were born were green.  This makes it easy to maximize green and minimize red as you place your piece.
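
The coloring falls directly out of comparing each cell before and after the life step.  A sketch of the classification (the mapping to actual screen colors lives in the draw code):

```java
/** How each tile is colored on the "future" board. */
enum CellFate {
    EMPTY,     // dead before and after: background
    SURVIVED,  // alive before and after: black
    DIED,      // alive before, dead after: red
    BORN;      // dead before, alive after: green

    static CellFate classify(boolean before, boolean after) {
        if (before && after)  return SURVIVED;
        if (before && !after) return DIED;
        if (!before && after) return BORN;
        return EMPTY;
    }
}
```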

Once the interface was intuitive, I added a scoring system based on how many organisms were alive after each turn, and played the game a bit.  The unfortunate result is that it's not very fun.  While Tetris allows you to create strategies and begin to intuit moves, the game of life is simply too unpredictable to allow anything but picking the best placement on a turn-by-turn basis.

Despite this, I think there could still be room for the game of life mechanism in a fun game.  Some cases that seem to hold water did come to mind: a game where the goal is to finish with a very specific number of organisms, a game where all the organisms need to be eliminated in a certain number of turns, or basically any game where the strategy is less about time and movement, and more about specific placement.

In short, the game of life is too unpredictable an algorithm to function desirably in a time-and-movement-based game.  I believe it would hold up far better under slower, more strategic circumstances, and Tetris certainly isn't that.

Thursday, April 1, 2010

Sound And The City: Grid Music And Notation

Over the past few weeks, I've taken the idea put forth with orchestraSeating, and modified it significantly.  The modifications have been an effort to reduce the deployment overhead, increase the quality of musical delivery, and make for a musical system that was less spatially specific.  The result is a new version of the installation that I call, for reasons that will become obvious, Grid Music.

orchestraSeating was built around the premise of physical sensors, in a specific site, playing back multi-tracked versions of classical orchestra music.  While this premise is interesting, it is lacking on a number of fronts.  For one, it demands a site-specific, resource-heavy installation (for example, the cafe at Alice Tully Hall, above).  For another, it constrains the piece to the domain of classical music, and therefore requires that the installation and the piece have some level of synchronicity.

A grid overlaid on the Alice Tully seating plan

By contrast, gridMusic uses a grid overlaid on a public space to fuel a generative music engine.  The engine uses an overhead camera in concert with the grid to monitor activity, and then follows a set of rules related to that activity.  The rules are "activated" by movement within the space, and once activated, they trigger playback of recorded clips, which yields the generative composition.

Much like Terry Riley's "In C", there is a set of clips (in this case, 20) that the algorithm has to choose from.  Also as in Riley's piece, the parts must be played in order.  However, the manner in which they are selected is based not on the personal preferences of the players, but on movement within the grid.  When movement is detected in a square of the grid, that square begins playback of Part 1, which continues looping until movement ceases in that square.  When movement restarts in a given square, that square advances to the next part and playback begins again.
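
Each square of the grid is effectively a small state machine.  A sketch of its rules in plain Java (the motion flag would come from the overhead camera; the edge-detection details here are assumptions):

```java
/** Playback state for one square of the grid. */
class GridSquare {
    static final int NUM_PARTS = 20;  // the pool of clips, as in "In C"
    int part = 0;                     // index of the part this square loops
    boolean playing = false;
    boolean started = false;          // has this square ever been activated?
    boolean wasMoving = false;

    /** Call once per camera frame with this square's motion reading. */
    void update(boolean moving) {
        if (moving && !wasMoving) {          // movement (re)starts
            if (started) part++;             // a restart advances to the next part
            started = true;
            // After the final part, the square goes inactive (back to white)
            // until the whole piece resets.
            playing = part < NUM_PARTS;
        } else if (!moving && wasMoving) {   // movement ceases
            playing = false;                 // the looping clip stops
        }
        wasMoving = moving;
    }
}
```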

This set of rules allows for easy visualization of the state of the piece.  In other words: notation.  The squares that are active are colored, and indicate which clips are currently playing.  Every time any square on the grid changes, a new file representing the grid is saved, complete with a timestamp.  By traversing the saved grids and their associated timestamps, one can easily reconstruct the composition that was played back at the site for a given performance.
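
The snapshot logic amounts to serializing the per-square part indices whenever anything changes.  A sketch (the file naming and plain-text format are placeholders):

```java
import java.io.FileWriter;
import java.io.IOException;
import java.text.SimpleDateFormat;
import java.util.Date;

/** Writes one timestamped file per change to the grid. */
final class GridNotation {
    static void saveSnapshot(int[][] parts) throws IOException {
        String stamp = new SimpleDateFormat("yyyyMMdd-HHmmss-SSS").format(new Date());
        try (FileWriter out = new FileWriter("grid-" + stamp + ".txt")) {
            for (int[] row : parts) {
                StringBuilder line = new StringBuilder();
                for (int p : row) line.append(p).append(' ');
                out.write(line.toString().trim() + "\n");
            }
        }
    }
}
```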

Because of the visually pleasing nature of the grid, it would not be unheard of to involve the grid in some way at the site.  This could be done as a projection or display on a monitor, and would perhaps help to invite the participation of individuals as they acted in the space.

As activity increased, the grid would become progressively more active, with colors varying and changing as per the algorithm's specification.  This would create a lively, interactive visual that would complement the audio portion of the installation.  Moreover, if the composition were broadcast live to the web, it could be similarly accompanied by the progression of the grids.

As each square reached its final "switch" from the black "part", it would return to its original white, inactive state.  It would remain this way until all the squares came to rest, at which point, after five minutes of downtime, the piece would begin again.

Tuesday, March 30, 2010

Pixel By Pixel: Multiple Perspectives In 2D

This week in Pixel By Pixel, we were asked to draw on the various painting styles discussed in class to inspire an interactive piece.  As such, I decided to leverage last week's work in image banding, and attempt to emulate Picasso's forcing of multiple perspectives into a single 2D plane.

Picasso's "Portrait Of Dora Maar"
 
In order to accomplish this, I modified my program to accept two camera inputs, one for each perspective.  I then added a keyboard interaction to allow the user to modify the banding resolution.  The result is a program that can look at two perspectives, and divide them amongst image bands accordingly.
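
The heart of the program is deciding, band by band, which camera a column of pixels comes from.  A simplified sketch (the actual version runs in Processing; `bandWidth` here is the keyboard-controlled resolution):

```java
/** Interleave vertical bands from two equally sized camera frames. */
final class Banding {
    static int[] interleave(int[] pixelsA, int[] pixelsB,
                            int width, int height, int bandWidth) {
        int[] out = new int[width * height];
        for (int y = 0; y < height; y++) {
            for (int x = 0; x < width; x++) {
                // Even-numbered bands come from camera A, odd ones from camera B,
                // so each perspective contributes exactly half the pixels.
                boolean fromA = (x / bandWidth) % 2 == 0;
                int i = y * width + x;
                out[i] = fromA ? pixelsA[i] : pixelsB[i];
            }
        }
        return out;
    }
}
```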

In order to achieve an effect similar to Picasso's, the user must carefully align the camera angles so that the subject in question is centered in each camera's frame.  This ensures the unity necessary in merging the two perspectives into one 2D plane.  In the image above, I've centered myself in the frame of both camera perspectives.

As the user interacts with the banding resolution, a variety of effects occur.  Here, we can see that at low banding resolutions, there are very obvious differentiations between the two images, resulting in recognizable pieces of each image, and a feeling of displacement for those pieces.

As the banding resolution is increased, the feeling of displacement is reduced.  Instead the image yields more of a feeling of simultaneous existence, with both images occupying the same space.  This is due to the increased resolution revealing a more evenly distributed rendering of each perspective, despite using exactly the same number of pixels.  At these higher resolutions, there is less necessity for the object or scene to be centered, as the increased clarity allows for both perspectives to be seen regardless of positioning. (This effect can be seen at the top of this entry)

Because the horizontal and vertical banding resolutions are controlled independently, they can also be used to combine the two effects.  This can result in a striped pattern that allows for some of the best of both worlds.  The increased resolution in one dimension increases clarity, while the lower resolution in the other dimension allows for the feeling of displacement, and the clear existence of two perspectives.

This two camera application of image banding and perspective is clearly in the early stages.  Most notably, a fixed and aligned camera configuration might yield more consistent images and interaction.  Additionally, a much larger number of cameras could be used, resulting in further displacement and perspective collisions.  For example, four cameras aligned on an x-y axis could result in a 2D image that showed perspective on an object or scene from all sides.  Alternatively, motorized camera mounts could allow the user to control perspective, and thus control the displacement in the 2D image they were creating.

Nature Of Code: Final Project Proposal, Tetris and Conway's Game Of Life


Our Nature of Code final project is about as open-ended as assignments get, with the option to leverage just about any of the phenomena we've studied.  What's more, the visualization (or lack thereof) is also completely open ended.  In short, we were asked to look at the huge amount of material covered this semester, and get inspired.

For me, that meant childhood video games - specifically, Tetris.  While reading this article on Conway's Game Of Life, I was struck by how similar the cellular shapes were to Tetris pieces.  This got me thinking as to whether there might be a way to combine the two into a video game that yielded novel gameplay, driven by cellular automata.

While I'm still fleshing out the idea, you can check out my thoughts so far in the proposal below.

ppt: Nature of Code Final Project Proposal - Tetris and Conway's Game Of Life

Tuesday, March 23, 2010

Pixel By Pixel: Pixel Transformation


This week in Pixel By Pixel, we were asked to delve into the world of pixel transformation.  Put simply: taking the individual pixels in an image and processing them to change location in the grid.  The goal was a result that was "dynamic and interactive".

Part 1: Reflected Pixels
I began the exercise by dividing the image in half both horizontally and vertically, and reflecting one quadrant's pixels into the remaining three quadrants.  The result is a program that is both entertaining and visually dynamic.  As can be seen from the images below, it allows for (particularly with facial anatomy) imagery that instinctually feels deformed or distorted.  However, it can feel pretty simplistic, along the lines of a house of mirrors.
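
A sketch of that reflection on a row-major pixel array (plain Java; I'm assuming the top-left quadrant as the source):

```java
/** Mirror one quadrant of a frame into the other three. */
final class Reflect {
    static void quadrants(int[] pixels, int width, int height) {
        int halfW = width / 2, halfH = height / 2;
        for (int y = 0; y < halfH; y++) {
            for (int x = 0; x < halfW; x++) {
                int p = pixels[y * width + x];            // top-left source pixel
                pixels[y * width + (width - 1 - x)] = p;  // mirrored to top-right
                pixels[(height - 1 - y) * width + x] = p; // mirrored to bottom-left
                pixels[(height - 1 - y) * width + (width - 1 - x)] = p; // bottom-right
            }
        }
    }
}
```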



Part 2: Banding
While experimenting with the reflection algorithm, I modified the factor by which the horizontal reflector was divided.  This resulted in the banding pattern seen above.  Observing the pattern caused me to think that intentional banding could create compelling imagery, based around the concepts of repetition, patterns, and blending.


Part 3: Banding Grids
I decided to pursue the banding algorithm, and to do so in both the vertical and horizontal dimensions.  I achieved the desired result by grabbing a section of pixels, and then reapplying it across the frame using modulo math.  Since a modulo operation wraps every time its divisor is reached, the algorithm starts anew and redraws the desired band.  The result is a "banding" grid of the desired area, as can be seen above.
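
The modulo trick itself is only a few lines.  A sketch (the band's origin and size are whatever region was grabbed, and the band is assumed to fit within the frame):

```java
/** Tile the whole frame with one grabbed band using modulo math. */
final class BandingGrid {
    static int[] apply(int[] src, int width, int height,
                       int bandX, int bandY, int bandW, int bandH) {
        int[] out = new int[width * height];
        for (int y = 0; y < height; y++) {
            for (int x = 0; x < width; x++) {
                // The modulo wraps each time bandW/bandH is reached,
                // so the band restarts and repeats as a grid.
                int sx = bandX + (x % bandW);
                int sy = bandY + (y % bandH);
                out[y * width + x] = src[sy * width + sx];
            }
        }
        return out;
    }
}
```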



Part 4: Abstract Banding Grids And Resolution
While a low resolution banding grid can be more compelling in terms of subject recognition, a higher resolution grid can actually yield more unusual and abstract patterns, particularly when it comes to subject blending and the perceived appearance of the subject.


Part 5: Subject Recognition In High Resolution Banding Grids
Perhaps the most counterintuitive thing about the high resolution grids is that despite their complexity and abstraction (see above), they are actually simply a grid of a repeated image.  What this means is that if great care is taken, the subject can actually still be recognized, as can be seen from the recognizable lettering in the image below.


The idea of banding grids takes a variable chunk of an existing screen and repurposes it as a tool to build patterns that can be simultaneously abstract and recognizable.  What's more, it repurposes a group of pixels on a macro level, such that the group itself becomes the implementation of the pixel.  Further developments in this area might include using the mouse to move the selected band, implementing a constantly moving band that iterates across the image, or intelligently selecting bands at the implemented resolution to recreate the image itself out of recursive bands sourced from the image.

Thursday, March 4, 2010

Sound And The City: Orchestra Seating Mk. 2

This week Sound And The City finds us re-presenting our final project proposals, this time with the addition of an exterior critique.  As such, I've modified my initial proposal to include a specific site and a more specific implementation plan.  I've also implemented a rudimentary demo in Logic Pro to simulate the effect of the installation.

ppt: orchestraSeating mk. 2

Wednesday, March 3, 2010

Pixel By Pixel: Experiments In Thermal Imaging


This week in Pixel By Pixel we were asked to embark on a project inspired by light phenomena observed on our recent trip to the New York Hall of Science.  As such, I decided to take the museum's thermal camera a step further, and actually create a thermal pixel grid, seen above.  The grid (in theory) would use Peltier junctions behind an insulated grid of copper tiles, which one could then heat individually using a microcontroller.

The paper "canvas" frame

Unfortunately, the project had challenges from the outset.  The first was in the initial concept itself: once I started testing the copper tiles with the Peltier junctions, I discovered that something about the copper tiles, probably their reflectivity, caused them to be invisible to the thermal camera.  I tried aluminum as an alternative, with the same result.  However, when I used paper, all seemed well, so that became my new solution.

The four transistors of the circuit

I created a frame for the paper canvas, and then endeavored to complete the full circuit required to power the four Peltier junctions.  This was achieved with a circuit consisting of four transistors and an external power source.  The power source was required to adequately heat the junctions, and the transistors were used to control the power source's path to each junction.

The fully wired canvas/box

Once the circuit was complete, I mounted it (and the four Peltier junctions) to the back of the canvas/box, thus enabling the entire unit to stand largely on its own, with the only outgoing connections being to a power source and the microcontroller.


Once this was complete, I began to test the unit using test patterns, and then the true problem arose: the Peltier junctions function smoothly for 20-30 seconds, but as they begin to retain heat, they lose their ability to turn "off" as pixels, and simply become a grid of pixels stuck in the "on" position.  What's more, the thermal camera itself creates recurring (and irritating) scan lines.  These effects can be seen (along with ITP in-lab antics) in the video grab from the thermal camera above.
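
For the record, the test patterns were nothing elaborate.  A sketch of the sequencing, in Java for illustration (the real driver ran on the microcontroller, and `setJunction` is a hypothetical stand-in for toggling one transistor; the timing is a placeholder):

```java
/** Step a simple test pattern across the 2x2 grid of thermal pixels. */
final class TestPattern {
    static void run() throws InterruptedException {
        boolean[][] frames = {
            {true,  false, false, false},  // light each pixel in turn...
            {false, true,  false, false},
            {false, false, true,  false},
            {false, false, false, true },
            {true,  true,  true,  true },  // ...then all four at once
            {false, false, false, false},
        };
        for (boolean[] frame : frames) {
            for (int pixel = 0; pixel < 4; pixel++) {
                setJunction(pixel, frame[pixel]);
            }
            Thread.sleep(2000);  // hold long enough for the thermal camera to register
        }
    }

    // Hypothetical: in the real circuit this drives the transistor
    // controlling the power path to one Peltier junction.
    static void setJunction(int pixel, boolean on) { }
}
```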

While the end result was ultimately a disappointment, the endeavor was not.  The idea still seems feasible, even if Peltier junctions may not be the appropriate solution.  What's more, before the pixels fail, one gets a general sense of the intended effect, and it's actually quite visually pleasing.  Finally, I feel that the addition of a color camera could take the project even further.  Put simply: there are still many aspects to explore.

Thursday, February 25, 2010

Little Computers: Fun With Drawing


This week for Little Computers, we were asked to use iPhone drawing techniques to create an app that utilized them in an interesting or animated way.  Given the weather, I decided to use precipitation for inspiration, and made an app based around rain.  

The app starts out (as above) with some clouds and a low lying body of water.  As you shake your iPhone, the accelerometer detects the movement, and the app generates water drops in response.  As the drops reach the body of water, it rises accordingly.

Once the body of water has risen to the edge of the clouds, the app detects this and holds off creating rain so the water has a moment to recede.  After that, you can start all over again!
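
The actual app is written against the iPhone SDK, but the rain and water-level rules boil down to something like this sketch (in Java here for illustration; every constant is a placeholder):

```java
/** Simplified version of the rain and water-level logic. */
class RainState {
    double waterLevel = 0.1;                // fraction of screen height
    static final double CLOUD_LINE = 0.8;   // where the clouds begin
    boolean raining = true;

    void onShake(double magnitude) {
        if (raining) spawnDrops(magnitude);  // shaking generates drops
    }

    void onDropLanded() {
        waterLevel += 0.002;                 // each landing drop raises the water
        if (waterLevel >= CLOUD_LINE) raining = false;  // hold off the rain
    }

    void update() {                          // called once per frame
        if (!raining) {
            waterLevel -= 0.001;             // the water recedes...
            if (waterLevel <= 0.1) raining = true;  // ...then rain can resume
        }
    }

    void spawnDrops(double magnitude) { /* create drop sprites */ }
}
```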

The shapes in the application were rendered in Quartz, Apple's native 2D rendering engine.

Sound And The City: New Japanese Underground

This week for Sound and the City, Mark Triant, Igal Nassima, and I were asked to present on some aspect of sonic history.  We collectively decided upon Japanese noise music, sometimes known as the "New Japanese Underground".  Below is our presentation (complete with graphics and sound) to the class.  Enjoy!

pdf: New Japanese Underground Presentation

Monday, February 22, 2010

Nature Of Code: Midterm Proposal

For the Nature of Code midterm, we were asked to do the following:  

Develop a proposal and a prototype for a "midterm" project. The scope of the project can be quite large (trial idea for a final, for example), however, you will not be expected to implement the entire project. For the proposal, include a description, relevant links, and a quick Processing sketch of the first step towards the idea. Link your proposal from the wiki. Next week, we will look at a selection of proposals and then on Mar 2/3, the results for all midterm projects will be presented.

As such, I've decided to continue my work on modeling popcorn using the Box2D library.  While the original attempt was moderately successful, there are a number of aspects that I'd like to modify and enhance so that the simulation will be more realistic.  Specifically, they are to:
  • Enhance the geometry of the popcorn so that the kernels and popped corns are not uniform in shape and size.
  • Work with density variables to create more realistic behavior as the corn pops and expands out of the kettle.
  • Work with kernel placement and velocity so that the kernels don't breach the kettle walls upon popping.
  • Allow for user interaction and variability within the simulation, including the amount of popcorn and turning the kettle on and off.
  • Create more realistic visualizations to associate with the simulation as a whole.
These changes will also serve to further familiarize me with the Box2d library, and its ramifications for geometry and particle systems in a modeling context.  The project will not be the prototype for a final project, but rather the final iteration of an earlier project before I embark on a final.

Tuesday, February 16, 2010

Nature Of Code: Popcorn Modeling

This week in Nature Of Code, we were asked to use the Box2D library to implement a model of some real-world phenomenon.  Box2D is a physics engine that manages the physics of a given system, to which you can add various physical "bodies".  This allows you to create a realistic physics model for a given 2D system with very little coding overhead.  For whatever reason, when I heard this, my mind immediately went to popcorn.

For a smaller number (100) of kernels, my model (with a small amount of tweaking) actually works quite well.  The kernels pop somewhat naturally, and the system generally handles the physics of the situation as you would expect: just like a real popcorn popper!  You can see the effect of 100 kernels in the first two illustrations of this post.


However, from there I started adding more kernels, and got into some trouble.  At 150 kernels (above) the physics started behaving a bit erratically, and areas with a lot of popping density would cause kernels to move through solid objects.

When I increased the number of kernels even further, to 300, this behavior became almost ubiquitous in the system.  Kernels were popping out of the kettle left and right, and the walls of the kettle seemed almost meaningless.

I have yet to discover the cause of this behavior, but my guess is that it's an inability of Box2D to handle the sudden transition from kernel to popped kernel: when a kernel expands to full size in a single step, it instantly overlaps its neighbors, and the solver's correction impulses can shove bodies straight through the kettle walls.  One idea I've had is to spread the "pop" from one step of the physics engine over, say, 3 or 4.  This might allow the engine to handle the change more gracefully.
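
One way to do that would be to destroy and recreate the kernel's circle fixture with a slightly larger radius on each of several consecutive steps.  A sketch against the JBox2D flavor of the API (the Processing wrapper differs slightly, and the density handling is a placeholder):

```java
import org.jbox2d.collision.shapes.CircleShape;
import org.jbox2d.dynamics.Body;
import org.jbox2d.dynamics.Fixture;
import org.jbox2d.dynamics.FixtureDef;

/** Spread a kernel's "pop" over several physics steps instead of one. */
final class Pop {
    static void growKernel(Body kernel, float targetRadius, int stepsRemaining) {
        Fixture old = kernel.getFixtureList();
        float radius = old.getShape().m_radius;
        if (stepsRemaining <= 0 || radius >= targetRadius) return;

        // Move a fraction of the way toward the popped size this step, so the
        // solver resolves several small overlaps instead of one huge one.
        float next = radius + (targetRadius - radius) / stepsRemaining;

        kernel.destroyFixture(old);
        CircleShape grown = new CircleShape();
        grown.m_radius = next;
        FixtureDef fd = new FixtureDef();
        fd.shape = grown;
        fd.density = 1.0f;  // placeholder; real popped corn would be less dense
        kernel.createFixture(fd);
        kernel.resetMassData();
    }
}
```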


One of the challenges of Box2D is that it has no graphical output, so everything you draw is based upon a Box2D object, but isn't actually a Box2D object.  What this means is that errors in one's graphics code can appear to be Box2D errors, when in fact they are coding errors on the user's end.  This is unquestionably one of the downsides of using a "black box" engine, but hardly enough of a problem to avoid using such a versatile tool.


source: Popcorn Model

Thursday, February 11, 2010

Sound And The City: Final Project Proposal - orchestraSeating

This week for Sound and The City, we presented our proposals for final projects.  My project, orchestraSeating, is an installation piece that interactively deconstructs classical music scores.  The PowerPoint presentation for my proposal can be found below.


ppt: orchestraSeating Proposal

Tuesday, February 9, 2010

Nature Of Code: Flower Modeling Mk. 3

The image above is a screen grab of the third revision of my flower-based physics model, last mentioned here.  Since then, the model has gone through a number of iterations.  The first modified the graphics to be slightly more refined, and more successfully integrated a wind force, as well as a flower "wobble".  The initial issues I had with the wind being too uniform were resolved by putting a limit on the force vector, as opposed to the resulting velocity vector; the velocity limit had been causing all of the flowers to share velocity and direction.  I introduced the "wobble" in an effort to give each flower its own movement, in spite of the shared wind vectors.

This revision, the third, takes the second revision and adds the concept of thermals.  The thermals can be seen in the regions defined by white lines above.  These thermals provide a third, largely upward, force that is defined by the developer.  When blooms cross them, the added force can result in spontaneous upward movement, much like the air currents that result from temperature differences.  From here, I think the next logical step is transforming the wind force from a single shared, screen-wide force into an array of wind vectors that vary based on screen location.
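
For the curious, the wind fix amounts to clamping the accumulated force before integrating it, rather than clamping the resulting velocity.  A simplified sketch with a hand-rolled vector (the limit value is a placeholder, and a mass of 1 is assumed):

```java
/** Minimal 2D vector with the limit() behavior of Processing's PVector. */
class Vec {
    float x, y;
    Vec(float x, float y) { this.x = x; this.y = y; }
    void add(Vec v) { x += v.x; y += v.y; }
    void limit(float max) {
        float mag = (float) Math.sqrt(x * x + y * y);
        if (mag > max) { x *= max / mag; y *= max / mag; }
    }
}

class Flower {
    Vec position = new Vec(0, 0), velocity = new Vec(0, 0);

    void applyForces(Vec wind, Vec wobble, Vec thermal) {
        Vec force = new Vec(0, 0);
        force.add(wind);     // shared by every flower
        force.add(wobble);   // unique to this flower
        force.add(thermal);  // mostly-upward push inside a thermal, zero outside
        force.limit(0.5f);   // clamp the FORCE; clamping the velocity instead
                             // made every flower share speed and direction
        velocity.add(force); // mass of 1, so force == acceleration
        position.add(velocity);
    }
}
```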

Code:  Flower Modeling Mk. 3

Thursday, February 4, 2010

Sound And The City: Soundwalk - "Imaginary Commute"


This week for Sound and The City, we were asked to create a "soundwalk", whereby we would make a field recording of a walk through a given area, and narrate along.  Since I live about 10 blocks from Penn Station, I decided to do an "imaginary commute" as though my home were an office, and I needed to catch a train at Penn to get home.  I left my front door around five in the evening, and the result is below.

audio: Soundwalk - Imaginary Commute

Saturday, January 30, 2010

Little Computers: Some Interesting Flags For gcc

This week in Little Computers, I was asked to put together a presentation on some "interesting flags" to gcc.  You can check out the pdf of the presentation below!

pdf: "Some Interesting Flags For gcc"

Thursday, January 28, 2010

Sound And The City: Deep Listening


This week for Sound And The City we were asked to do a bit of "deep listening".  This essentially consists of spending twenty minutes in a given environment, focusing entirely on the environmental sound in that area.  This includes having your eyes closed, doing nothing else, and generally paying as much attention as possible.  After the twenty minutes are up, one takes notes (of any kind desired) on the experience, and the result is a "deep listening" log.

Daniel (course instructor) asked us in class to name a place we loved in New York - I chose the West Side Highway park, where I run daily.  The trick here was that this then became the site of our deep listening experience.  My log from the experience is pictured above, while an mp3 of the same time period is below.  Enjoy!

mp3: Deep Listening, West Side Highway, New York