tag:blogger.com,1999:blog-59851457765295195362024-03-19T06:07:54.304-07:00I Am Jack's Graduate EducationA day in the life of two years at Tisch's Interactive Telecommunications Program.Patrickhttp://www.blogger.com/profile/03497220940285089914noreply@blogger.comBlogger52125tag:blogger.com,1999:blog-5985145776529519536.post-65051781846678257252011-03-29T13:19:00.000-07:002011-03-29T13:19:37.689-07:00Spatial Media: draw/erase/space<div style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgBdUsFIZRHzw_F0bg4pgIF7RqEfkjFA3IgI_RbWP577drDkzqI7OY8tG67238lbTgjl1DbIw3EfSO172HCv0A6n48IO2M8ju9SwLkmCMsSVcmmu9RJQ1VMxqdS-HqqRoGLy4VuxQhJ2C5I/s1600/Screen+shot+2011-03-29+at+2.49.29+PM.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="240" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgBdUsFIZRHzw_F0bg4pgIF7RqEfkjFA3IgI_RbWP577drDkzqI7OY8tG67238lbTgjl1DbIw3EfSO172HCv0A6n48IO2M8ju9SwLkmCMsSVcmmu9RJQ1VMxqdS-HqqRoGLy4VuxQhJ2C5I/s320/Screen+shot+2011-03-29+at+2.49.29+PM.png" width="320" /></a><i> </i></div><i>draw/erase/space</i> is a site-specific art piece designed to increase public interaction while questioning how art is created and destroyed, and how individuals regard others' creations. Designed to be installed in a public subway station, the installation gives multiple users the option to "draw" or "erase" within a given space on the subway platform. Their creation or destruction of art is then displayed on a projected screen on the opposite platform.<br />
<br />
<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjnnhzI2CdGuJTcP0VoYU17USQiRjvGGDvDhyphenhyphenYdfvoLqN5vnpNfkPC3QGvC1hhxlrYynYqvB9o5sjr8vMVAV48UQejOGkXpSI7a9DnYU9vv01ktDnz_HiXjUXw08yNWfE2AulCflUAy0wic/s1600/web-rausch.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="320" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjnnhzI2CdGuJTcP0VoYU17USQiRjvGGDvDhyphenhyphenYdfvoLqN5vnpNfkPC3QGvC1hhxlrYynYqvB9o5sjr8vMVAV48UQejOGkXpSI7a9DnYU9vv01ktDnz_HiXjUXw08yNWfE2AulCflUAy0wic/s320/web-rausch.jpg" width="273" /></a><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhGYVN-3M2ELjJQuH6cFYZAZK1pEGrx5qyk_3QbUVvMXtCRmLBYxwJ_3BNFCWrEfujLKa3ciqbHIOLGWdEGumFqTwv7fzo-Zk1Qk87RoxDRn7JUfD1l_1MkjzMUP_ir6B_iyaNId6dZn7tO/s1600/414AEWP9NML._SL500_AA300_.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhGYVN-3M2ELjJQuH6cFYZAZK1pEGrx5qyk_3QbUVvMXtCRmLBYxwJ_3BNFCWrEfujLKa3ciqbHIOLGWdEGumFqTwv7fzo-Zk1Qk87RoxDRn7JUfD1l_1MkjzMUP_ir6B_iyaNId6dZn7tO/s1600/414AEWP9NML._SL500_AA300_.jpg" /></a></div><br />
The project draws much of its artistic interpretation from two pieces: Robert Rauschenberg's "Erased de Kooning Drawing", and Brian Eno & Robert Fripp's "No Pussyfooting". Both of these pieces wrestle with the complexities of creation and collaboration, and with whether destruction is actually a form of creation as well. <i>draw/erase/space</i> attempts to embrace this spirit by providing individuals with an environment where they can confront these issues head on, alongside other hopeful "artists".<br />
<br />
<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiH2UF6H2qXpHhRDHyM7yA7Zci37D_6J5NAfFnIFL42NKUFs_VrBGV9PqduIAUqtae8bQtXgmcL0emkxUiJy1tzSDTFJQT8dm3ygcOGTt_xTb0Vzg23F1dMEGvjCEcsmWqs1dFx8ClEaPpt/s1600/_POP1862.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="320" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiH2UF6H2qXpHhRDHyM7yA7Zci37D_6J5NAfFnIFL42NKUFs_VrBGV9PqduIAUqtae8bQtXgmcL0emkxUiJy1tzSDTFJQT8dm3ygcOGTt_xTb0Vzg23F1dMEGvjCEcsmWqs1dFx8ClEaPpt/s320/_POP1862.jpg" width="212" /></a><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgELmff5utm5HzYZ4oiD6xwlsD0kz0V7bTPlp8X-WlVrGb33eQvNRbDa1oju-VK4buEgnhexaobZGno6xigkiB3QosC5u36UyGUjeuOHTmFdtZECcKu_wo4o5bISn8QQAv_RIi5eLi8qJKP/s1600/floor5.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="240" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgELmff5utm5HzYZ4oiD6xwlsD0kz0V7bTPlp8X-WlVrGb33eQvNRbDa1oju-VK4buEgnhexaobZGno6xigkiB3QosC5u36UyGUjeuOHTmFdtZECcKu_wo4o5bISn8QQAv_RIi5eLi8qJKP/s320/floor5.jpg" width="320" /></a></div>Using a ceiling-mounted Kinect and projector, the piece projects a guideline for where users are able to stand in order to either "erase" or "draw". This floor projection marks the boundaries within which users can stand to interact with the piece itself.<br />
<br />
<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhLoxJSqGyyfYnHLsGzK-URb49cWelAgGllCbk17oF7JxNsuJv-du7ScdDPpDlePLE-V7cFKyGAiA0rwDUbZnMqhITBI0VL0ZsPIfu5W1NV5PmGUHUuewya8inA1ESEAHdjyvRovWBK-6ST/s1600/_POP1863.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="320" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhLoxJSqGyyfYnHLsGzK-URb49cWelAgGllCbk17oF7JxNsuJv-du7ScdDPpDlePLE-V7cFKyGAiA0rwDUbZnMqhITBI0VL0ZsPIfu5W1NV5PmGUHUuewya8inA1ESEAHdjyvRovWBK-6ST/s320/_POP1863.jpg" width="212" /></a><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgN8HCm0aAjsdRCJ8RwBTX_ycYwkp0W0ComZ7kt4ZVFa3A96egb3Rv1UEv2gM844uT6ZxDP-r2S8vHMoOUs_k826cqwuef0YFqs0i8D0cTby-v6E7Y5TP7IImhOQLGwMN7iWbRw90M7qUjM/s1600/_POP1864.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="320" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgN8HCm0aAjsdRCJ8RwBTX_ycYwkp0W0ComZ7kt4ZVFa3A96egb3Rv1UEv2gM844uT6ZxDP-r2S8vHMoOUs_k826cqwuef0YFqs0i8D0cTby-v6E7Y5TP7IImhOQLGwMN7iWbRw90M7qUjM/s320/_POP1864.jpg" width="212" /></a></div>The interaction occurs via the overhead-mounted Kinect, in combination with a front-facing projector. The user raises their hand above their head, and the hand is detected via blob tracking. The front-facing projection then takes these blobs as input to either erase or draw with a texture on a permanent background.<br />
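The detection step can be sketched roughly as follows. This is a simplified stand-in for the installation's actual OpenCV blob tracking (the class and method names here are made up for illustration): it thresholds an overhead depth map and takes the centroid of the "near" pixels as the position of the raised hand.

```java
// Hypothetical sketch of raised-hand detection from an overhead depth map.
// Not the installation's actual OpenCV code.
public class HandDetector {
    // depth[y][x] holds distance from the ceiling-mounted sensor (e.g. in mm).
    // Pixels nearer than 'threshold' are assumed to belong to a raised hand.
    public static int[] centroid(int[][] depth, int threshold) {
        long sx = 0, sy = 0, n = 0;
        for (int y = 0; y < depth.length; y++) {
            for (int x = 0; x < depth[y].length; x++) {
                if (depth[y][x] < threshold) { sx += x; sy += y; n++; }
            }
        }
        if (n == 0) return null;  // nothing raised above the height cutoff
        return new int[] { (int) (sx / n), (int) (sy / n) };
    }
}
```

A real version would use OpenCV's connected-component blob detection rather than a single centroid, so that two users on the same side don't merge into one cursor.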
<br />
<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi0XxIjQ37nYsBPR3EocVsvi4SiEuXLDJRLcoRLoOrpFTKr5f6HvUzos-CcgyEUVet5kKQgHUR67SvFsxby-Yf-y5jrtGF62AnWII3aJfOseSmuXHFiG-djfSe6hbKCwru-cw5sEFdItLYF/s1600/Screen+shot+2011-03-29+at+2.49.42+PM.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="233" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi0XxIjQ37nYsBPR3EocVsvi4SiEuXLDJRLcoRLoOrpFTKr5f6HvUzos-CcgyEUVet5kKQgHUR67SvFsxby-Yf-y5jrtGF62AnWII3aJfOseSmuXHFiG-djfSe6hbKCwru-cw5sEFdItLYF/s320/Screen+shot+2011-03-29+at+2.49.42+PM.png" width="320" /></a><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjhUiINT0rMqb5ISQQe1I-8sSJ3afMGePBhH9M6aPqdRb7PzbGMB3min2Z8PxDt8QAPLyF6cU5o2JNWMjCElVbuVi-XDrgLGAZa9U50NXlbqqj3uJQkaUYrqyVOPKuC_KPZAJ0qBeJ9ETlV/s1600/Screen+shot+2011-03-29+at+2.50.55+PM.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="237" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjhUiINT0rMqb5ISQQe1I-8sSJ3afMGePBhH9M6aPqdRb7PzbGMB3min2Z8PxDt8QAPLyF6cU5o2JNWMjCElVbuVi-XDrgLGAZa9U50NXlbqqj3uJQkaUYrqyVOPKuC_KPZAJ0qBeJ9ETlV/s320/Screen+shot+2011-03-29+at+2.50.55+PM.png" width="320" /> </a></div><div class="separator" style="clear: both; text-align: left;">The interaction works by feeding the Kinect's depth map to OpenCV. A height threshold filters out any movement below a certain height. As the user raises their hand, the program detects which side they are on (draw/erase) and draws a cursor (green/red) to indicate where their "brush" is located. These cursors are mapped to a larger space, as they only take up half of the Kinect window, but need to be able to traverse the whole of the projected image.</div><div class="separator" style="clear: both; text-align: left;"><br />
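The side detection and the half-window-to-full-projection mapping might look something like this. This is a hypothetical sketch, not code from Drawerasespace.java: the Kinect frame is split down the middle, and a blob's position within its half is stretched across the full projector width.

```java
// Illustrative sketch of cursor mapping (names invented for this example).
public class CursorMapper {
    // The Kinect frame is split down the middle: left half = "draw" zone,
    // right half = "erase" zone.
    public static boolean isDrawSide(int x, int kinectWidth) {
        return x < kinectWidth / 2;
    }
    // A blob only roams one half of the Kinect frame, but its cursor must
    // traverse the whole projected image, so its x within that half is
    // rescaled to the full projector width.
    public static int toProjectorX(int x, int kinectWidth, int projWidth) {
        int half = kinectWidth / 2;
        int xInHalf = x % half;             // position within the user's half
        return xInHalf * projWidth / half;  // stretch to full projector width
    }
}
```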
</div><div class="separator" style="clear: both; text-align: left;"></div><div class="separator" style="clear: both; text-align: center;"> <a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg6cCdRvKZvBlNCj0RX4N6qlZrr8JAghQq0HuqtLKvyMXbYe493VRvZRCIy9Ezu_wzZhRrCpSifvCv4jSuW10vVjnCbP3SNI45A6I1KHUOf90PI3oto3fSH-ZtX_lFp08P1KNGW0kiDJdqw/s1600/Screen+shot+2011-03-29+at+2.51.06+PM.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="235" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg6cCdRvKZvBlNCj0RX4N6qlZrr8JAghQq0HuqtLKvyMXbYe493VRvZRCIy9Ezu_wzZhRrCpSifvCv4jSuW10vVjnCbP3SNI45A6I1KHUOf90PI3oto3fSH-ZtX_lFp08P1KNGW0kiDJdqw/s320/Screen+shot+2011-03-29+at+2.51.06+PM.png" width="320" /> </a> <a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjGjTQPGrvC_f-j0fvMcOr_epHEgs6zj_zT66pQbXaHk_ilHCFAurlu9cyn2dUlzrU2awRGj32KO_kNwsZalqi9uu0in6EPAN7KPzIbDKUVEPiTGSUC6H11_RG83ajXJplYEgn3kNc0frpE/s1600/Screen+shot+2011-03-29+at+2.51.12+PM.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="232" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjGjTQPGrvC_f-j0fvMcOr_epHEgs6zj_zT66pQbXaHk_ilHCFAurlu9cyn2dUlzrU2awRGj32KO_kNwsZalqi9uu0in6EPAN7KPzIbDKUVEPiTGSUC6H11_RG83ajXJplYEgn3kNc0frpE/s320/Screen+shot+2011-03-29+at+2.51.12+PM.png" width="320" /></a></div><div class="separator" style="clear: both; text-align: left;">As users interact in the space, the image becomes a combination of their drawing and erasing "work", and reflects their behavior over time. The space supports multiple users in each area, and can also support a variety of backgrounds and brushes. <i><b> </b></i></div><div class="separator" style="clear: both; text-align: left;"><br />
</div><div class="separator" style="clear: both; text-align: left;"><i><b>code: <a href="http://patrickproctor.com/code/Drawerasespace.java">Drawerasespace.java</a></b></i></div>Patrickhttp://www.blogger.com/profile/03497220940285089914noreply@blogger.com0tag:blogger.com,1999:blog-5985145776529519536.post-7108861578009726402010-04-26T16:52:00.000-07:002010-04-26T16:52:34.547-07:00Nature Of Code Final: Game Of LifetrisAs I <a href="http://iamjacksgraduateeducation.blogspot.com/2010/03/nature-of-code-final-project-proposal.html">originally outlined in my proposal</a>, for my <a href="http://itp.nyu.edu/varwiki/Syllabus/Nature-of-Code-S10">Nature Of Code</a> final I decided to create an implementation of Conway's game of life and combine it with the classic computer game Tetris. The result, while not terribly <i>fun </i>to play, goes a long way toward demonstrating that the game of life is more complex than it appears and, despite its simple rules, offers a large degree of randomness.<br />
<br />
I started by trying to discern the basics of the game, and came to the conclusion that I would need to start with some pieces/organisms already on the board. The game of life tends towards decimating populations, so having no starting population lends itself to a quickly emptied (and boring) board. Moreover, given the game's natural tendencies, I decided to make the aim of the game keeping pieces alive, rather than destroying them (as was true in the original Tetris).<br />
<br />
<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjOI6Z15jrJ025-fvqGEX42JajmvcQqtO09KxXmUEEIiPti3gHIyAqzikwyVrFf57vjVTKDDgp_I4mRev2FsAXET2Y9ld4XDMGm_BHDIjGAU34Ly4N7_VgSGxKiHz2AaAfAjJnDIVmmUr2O/s1600/Picture+2.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="400" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjOI6Z15jrJ025-fvqGEX42JajmvcQqtO09KxXmUEEIiPti3gHIyAqzikwyVrFf57vjVTKDDgp_I4mRev2FsAXET2Y9ld4XDMGm_BHDIjGAU34Ly4N7_VgSGxKiHz2AaAfAjJnDIVmmUr2O/s400/Picture+2.png" width="217" /></a></div>I took this implementation and coded a simple black-and-white version that played out a generation of the game of life after each Tetris piece was dropped. Once this was complete, I noticed a major problem: despite my hopes to the contrary, there's little way to discern what will happen in the next round of the game.<br />
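The generation step run after each piece locks is just Conway's standard rules on a finite board. A minimal version in plain Java (rather than the original Processing sketch), with off-board cells counted as dead:

```java
public class Life {
    // One Game of Life generation on a finite board; cells beyond the edges
    // are treated as dead.
    public static boolean[][] step(boolean[][] b) {
        int h = b.length, w = b[0].length;
        boolean[][] next = new boolean[h][w];
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                int n = 0;  // count the eight neighbours
                for (int dy = -1; dy <= 1; dy++)
                    for (int dx = -1; dx <= 1; dx++) {
                        if (dx == 0 && dy == 0) continue;
                        int yy = y + dy, xx = x + dx;
                        if (yy >= 0 && yy < h && xx >= 0 && xx < w && b[yy][xx]) n++;
                    }
                // Conway's rules: survive on 2 or 3 neighbours, birth on exactly 3.
                next[y][x] = b[y][x] ? (n == 2 || n == 3) : (n == 3);
            }
        }
        return next;
    }
}
```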
<br />
<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhMYCQx-4IEqzMfReroj-UBZjViS0ke41_QptMNRB24aWv9rThrnVYPoCDh-eO-W-oht9ctBIwIIPLp_BDInQDBB44RRiPojtYWdgCr5MeBtHSIvLbnuVGovXRtndywukE9NK2P80KI9lQB/s1600/Picture+3.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="400" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhMYCQx-4IEqzMfReroj-UBZjViS0ke41_QptMNRB24aWv9rThrnVYPoCDh-eO-W-oht9ctBIwIIPLp_BDInQDBB44RRiPojtYWdgCr5MeBtHSIvLbnuVGovXRtndywukE9NK2P80KI9lQB/s400/Picture+3.png" width="217" /></a></div>To work around this, I next tried implementing a "future" system, where the game would calculate the results of your current piece placement ahead of time and show them as a translucent overlay on top of the current board. The problem is that, while interesting, it's still extremely difficult to tell whether you're creating or destroying pieces.<br />
<br />
<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj5oKff-rxPRlTmjh2Cko9WThmYS5PXB8HL3WIhLumzL37EEijxxkVvTEKUEM-O8JD89CR0LLgcSl2QTBXRdygcEyaaAIRUZYBs9XgG2pEO_1oOylLX8YDxcLVOBZbpxd439c1T7fGlAXMu/s1600/Picture+4.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="400" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj5oKff-rxPRlTmjh2Cko9WThmYS5PXB8HL3WIhLumzL37EEijxxkVvTEKUEM-O8JD89CR0LLgcSl2QTBXRdygcEyaaAIRUZYBs9XgG2pEO_1oOylLX8YDxcLVOBZbpxd439c1T7fGlAXMu/s400/Picture+4.png" width="217" /></a></div>Continuing my quest for at least some level of usability, I reached for one final implementation. I maintained the "future" board concept, but colored the tiles more strategically: tiles that stayed alive became black, tiles that died red, and tiles that were born green. In this way, it's easy to try to maximize green and minimize red as you place your piece.<br />
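The coloring reduces to comparing each cell's current state against its computed next state. A tiny sketch of that classification (the class name and letter codes are illustrative):

```java
public class LifePreview {
    // Compare a cell's current state to its next-generation state:
    // 'S' = survives (black), 'D' = dies (red), 'B' = born (green),
    // '.' = stays empty.
    public static char classify(boolean now, boolean next) {
        if (now && next) return 'S';
        if (now) return 'D';
        if (next) return 'B';
        return '.';
    }
}
```

Running this over the whole board against the precomputed "future" board yields the green/red/black overlay described above.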
<br />
Once the interface was intuitive, I added a scoring system based on how many organisms were alive after each turn, and played the game a bit. The unfortunate result is that it's not very fun. While Tetris allows you to create strategies and begin to intuit moves, the game of life is simply too random to allow anything but trying to pick correct placement on a turn-by-turn basis.<br />
<br />
Despite this, I think there could still be room for using the game of life mechanism in a fun game. Some cases that might hold water did come to mind: a game where the goal is to finish with a very specific number of organisms, a game where all the organisms need to be eliminated in a certain number of turns, or basically any game where the strategy is less about time and movement, and more about specific placement.<br />
<br />
In short, the game of life is too random an algorithm to function desirably in a time- and movement-based game. I believe it would hold up far better under slower, more strategic circumstances, and Tetris is certainly not that.Patrickhttp://www.blogger.com/profile/03497220940285089914noreply@blogger.com0tag:blogger.com,1999:blog-5985145776529519536.post-3492891038579980982010-04-01T07:45:00.000-07:002010-04-01T07:45:34.466-07:00Sound And The City: Grid Music And Notation<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhvky_jVsxx_0EzJ6zypKEL_biLOZ_UFXMu5-YGw_LSIBSw9SaVWYLCyzsQJy7bi8ALyFsFl9jhzd9Pp6EM_-Mijkh0-4m5-Mh-lIpMmlKIEw0yL2TXb7RzmGkK4b8_BKsp0L08WV9__Y9h/s1600/Picture+1.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhvky_jVsxx_0EzJ6zypKEL_biLOZ_UFXMu5-YGw_LSIBSw9SaVWYLCyzsQJy7bi8ALyFsFl9jhzd9Pp6EM_-Mijkh0-4m5-Mh-lIpMmlKIEw0yL2TXb7RzmGkK4b8_BKsp0L08WV9__Y9h/s320/Picture+1.png" /></a></div>Over the past few weeks, I've taken <a href="http://iamjacksgraduateeducation.blogspot.com/2010/03/sound-and-city-orchestra-seating-mk-2.html">the idea put forth with orchestraSeating</a>, and modified it significantly. The modifications have been an effort to reduce the deployment overhead, increase the quality of musical delivery, and make for a musical system that is less spatially specific. The result is a new version of the installation that I call, for reasons that will become obvious, Grid Music.<br />
<br />
<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi__IfYiJDvlX4QTlLeAMdbDvfDyb_Cz4kL5t4Y-pQRgy20BT3sRteNFql6dvJGP5un-bWi5pS5CcSwLcU56KRis1NHZZbOevGWUWJuZb0kplXNRVon2ix3YN8rX8vjqMsjVZdwleIlKrmj/s1600/tully.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi__IfYiJDvlX4QTlLeAMdbDvfDyb_Cz4kL5t4Y-pQRgy20BT3sRteNFql6dvJGP5un-bWi5pS5CcSwLcU56KRis1NHZZbOevGWUWJuZb0kplXNRVon2ix3YN8rX8vjqMsjVZdwleIlKrmj/s320/tully.jpg" /></a></div>orchestraSeating was built around the premise of physical sensors, in a specific site, playing back multi-tracked versions of classical orchestra music. While this premise is interesting, it is lacking on a number of fronts. For one, it begs for a site-specific, resource-heavy installation (for example, the cafe at Alice Tully hall, above). For another, it constrains the piece to the domain of classical music, and therefore requires that the installation and the piece have some level of synchronicity.<br />
<br />
<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEirSRZFw5fZXlGUnydUuHbVvHG_m5Av914gx0_jXYkUyoqHQRFEUmD8RMkK_fkpgmM6Li3JORNWFwFjOnAka8Ym7zO5E_5B3nt8UNmGh4Z2uX6O207_bYn_R3lBoUrixUoA9Ox-H0RiLiip/s1600/Picture+8.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEirSRZFw5fZXlGUnydUuHbVvHG_m5Av914gx0_jXYkUyoqHQRFEUmD8RMkK_fkpgmM6Li3JORNWFwFjOnAka8Ym7zO5E_5B3nt8UNmGh4Z2uX6O207_bYn_R3lBoUrixUoA9Ox-H0RiLiip/s320/Picture+8.png" /></a></div><div style="text-align: center;"><span style="font-size: x-small;"><i>A grid overlaid on the Alice Tully seating plan</i></span></div><br />
By contrast, gridMusic uses a grid overlaid on a public space to fuel a generative music engine. The engine uses an overhead camera in concert with the grid to monitor activity, and then follows a set of rules related to that activity. The rules are "activated" by movement within the space, and once activated, they trigger playback of recorded clips, which yields the generative composition.<br />
<br />
<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgXr698gBqbhH_zCDiTTepv5GUAueBilME50NqL39wxhh2moTrw0R0j7OtOci2VKtt6HuDlxIEhDYslpqLnCM8RiPea5Lv1aYLBXIlZzYS2Tik6VdkoKU9MTGPQiecNmsESGj1HuPvFUAxY/s1600/Picture+5.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgXr698gBqbhH_zCDiTTepv5GUAueBilME50NqL39wxhh2moTrw0R0j7OtOci2VKtt6HuDlxIEhDYslpqLnCM8RiPea5Lv1aYLBXIlZzYS2Tik6VdkoKU9MTGPQiecNmsESGj1HuPvFUAxY/s320/Picture+5.png" /></a></div>Much like Terry Riley's "In C", there is a set of clips (in this case, 20) that the algorithm has to choose from. Also similar to Riley, the parts must be played in order. However, the manner in which they are selected is based not on the personal preferences of the players, but on movement within the grid. When movement is detected in a square of the grid, that square begins playback of Part 1, which continues looping until movement ceases in that square of the grid. When movement <i>restarts</i> in a given square of the grid, that square is advanced to the next part.<br />
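Each square of the grid can be modeled as a small state machine advanced on the rising edge of motion. A hypothetical sketch of that rule, not the installation's actual code (it also lets a square rest once it has played its final part, matching the ending behavior described below):

```java
// Illustrative model of one grid square's playback rule.
public class GridCell {
    private int part = 0;            // 0 = never triggered; 1..numParts otherwise
    private boolean playing = false;
    private final int numParts;      // 20 clips in the piece described here

    public GridCell(int numParts) { this.numParts = numParts; }

    // Feed one motion reading per frame; returns the clip to loop (0 = silent).
    public int update(boolean motion) {
        if (motion && !playing && part < numParts) {
            part++;                  // movement (re)started: advance to next part
            playing = true;
        } else if (!motion) {
            playing = false;         // movement ceased: the loop stops
        }
        return playing ? part : 0;
    }
}
```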
<br />
<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEghrMz4rTlmelaiSYd9mpF-xPzT7svQF66GiX_5dYlgdnXLvenxXWH9bXjs2pZJ5Wwxvrwt2C4XjgJxq8FnP1kx9tx3oRf3hdLAxYIXqplw9hndpdhSoZ5ebeFjNpufCsHWRoIySMK1M5oI/s1600/Picture+2.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEghrMz4rTlmelaiSYd9mpF-xPzT7svQF66GiX_5dYlgdnXLvenxXWH9bXjs2pZJ5Wwxvrwt2C4XjgJxq8FnP1kx9tx3oRf3hdLAxYIXqplw9hndpdhSoZ5ebeFjNpufCsHWRoIySMK1M5oI/s320/Picture+2.png" /></a></div>This set of rules allows for easy visualization of the state of the piece. In other words: notation. The squares that are active are colored, and indicate which clips are currently playing. Every time any square on the grid changes, a new file representing the grid is saved, complete with a timestamp. By traversing the saved grids and their associated timestamps, one can easily discern the composition that was played back at the site for a given performance.<br />
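The notation mechanism amounts to diffing the grid against its last saved state and logging a timestamped copy whenever any square changes. A rough sketch of the idea (class and method names are made up; the real system writes each snapshot to a file):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Illustrative notation logger: one timestamped grid snapshot per change.
public class GridNotation {
    public static class Snapshot {
        public final long timestamp;
        public final int[] parts;  // current part number per grid square
        Snapshot(long t, int[] p) { timestamp = t; parts = p.clone(); }
    }

    private int[] last;
    private final List<Snapshot> log = new ArrayList<>();

    // Record a timestamped copy of the grid whenever any square changed.
    public void observe(int[] parts, long now) {
        if (last == null || !Arrays.equals(parts, last)) {
            log.add(new Snapshot(now, parts));
            last = parts.clone();
        }
    }

    // The ordered snapshots are the "score" of a given performance.
    public List<Snapshot> score() { return log; }
}
```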
<br />
<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEivA60s5Im1A0Gkm_l9i_uHXv3A08n5t0MtRQHrSzVwgIvq1aeVhTVnPMcFyr49Zt71qgjC-2fvFE8ultj2Ir7L7DiUdRq5rTtcLgQvTABaH6oj40xzrcLfTc9sbZNwK7C-vmkZ7MFjfb8L/s1600/Picture+3.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEivA60s5Im1A0Gkm_l9i_uHXv3A08n5t0MtRQHrSzVwgIvq1aeVhTVnPMcFyr49Zt71qgjC-2fvFE8ultj2Ir7L7DiUdRq5rTtcLgQvTABaH6oj40xzrcLfTc9sbZNwK7C-vmkZ7MFjfb8L/s320/Picture+3.png" /></a></div>Because of the visually pleasing nature of the grid, it would not be unheard of to involve the grid in some way at the site. This could be done as a projection or display on a monitor, and would perhaps help to invite the participation of individuals as they acted in the space.<br />
<br />
<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEidg_P1ljbUVnnw-SuoEYKtXCAcGK5MqyExSkHb82xCvXRfVIdvycETOXnVGLgvvPRKf94EJWAjNrooKHYBlt8pZ7jUgV2hVKHFFFINg_Pnavqq4DYfZimt5vS6VB_BdK9rBsT-jqRzWON9/s1600/Picture+6.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEidg_P1ljbUVnnw-SuoEYKtXCAcGK5MqyExSkHb82xCvXRfVIdvycETOXnVGLgvvPRKf94EJWAjNrooKHYBlt8pZ7jUgV2hVKHFFFINg_Pnavqq4DYfZimt5vS6VB_BdK9rBsT-jqRzWON9/s320/Picture+6.png" /></a></div>As activity increased, the grid would become progressively more active, with colors varying and changing per the algorithm's specification. This would create a lively, interactive visual that would complement the audio portion of the installation. Moreover, if the composition were broadcast live to the web, it could be similarly accompanied by the progression of the grids.<br />
<br />
<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhvky_jVsxx_0EzJ6zypKEL_biLOZ_UFXMu5-YGw_LSIBSw9SaVWYLCyzsQJy7bi8ALyFsFl9jhzd9Pp6EM_-Mijkh0-4m5-Mh-lIpMmlKIEw0yL2TXb7RzmGkK4b8_BKsp0L08WV9__Y9h/s1600/Picture+1.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhvky_jVsxx_0EzJ6zypKEL_biLOZ_UFXMu5-YGw_LSIBSw9SaVWYLCyzsQJy7bi8ALyFsFl9jhzd9Pp6EM_-Mijkh0-4m5-Mh-lIpMmlKIEw0yL2TXb7RzmGkK4b8_BKsp0L08WV9__Y9h/s320/Picture+1.png" /></a></div>As each square reached its final "switch" from the black "part", it would return to its original white, inactive state. It would remain this way until all the squares came to rest, at which point, after five minutes of downtime, the piece would begin again.Patrickhttp://www.blogger.com/profile/03497220940285089914noreply@blogger.com2tag:blogger.com,1999:blog-5985145776529519536.post-57471091172105700222010-03-30T22:41:00.000-07:002010-03-31T10:07:07.664-07:00Pixel By Pixel: Multiple Perspectives In 2D<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi3FRdQmZtdrigqhoovRu2bIRxTz9uBi7T4eeNr1cUHSol1FIkBW0W5mTE57mxysQTtRw9wV2gdImvV2f4QtufrwuAkp4m2CZxybZDJV7gFZtXsW4FCpv3t7DHtoQuuY0CSMcPrbLCi5vjY/s1600/Picture+8.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="297" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi3FRdQmZtdrigqhoovRu2bIRxTz9uBi7T4eeNr1cUHSol1FIkBW0W5mTE57mxysQTtRw9wV2gdImvV2f4QtufrwuAkp4m2CZxybZDJV7gFZtXsW4FCpv3t7DHtoQuuY0CSMcPrbLCi5vjY/s400/Picture+8.png" width="400" /></a></div> This week in <a href="http://itp.nyu.edu/varwiki/Syllabus/Pixels-S10">Pixel By Pixel</a>, we were asked to take the various painting styles discussed in class as inspiration for an interactive piece. 
As such, I decided to leverage <a href="http://iamjacksgraduateeducation.blogspot.com/2010/03/pixel-by-pixel-pixel-transformation.html">last week's work in image banding</a> and attempt to emulate Picasso's technique of forcing multiple perspectives into a single 2D plane.<br />
<br />
<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjz7QgKcjCdXIBaorYGwkxUHZNuaCWlGZjh8zi_t6GavUbqEocRY7PTWs_0p2JOXegEElFVEzCp_a_BnpmLbyEimCXAkCoCbAnxUOqwMBO1cmrrYYYHEKhpzF5cAuUje1m4rs9iZYFwFvn4/s1600/2011018397jpg-05706d7e20ff67d2.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjz7QgKcjCdXIBaorYGwkxUHZNuaCWlGZjh8zi_t6GavUbqEocRY7PTWs_0p2JOXegEElFVEzCp_a_BnpmLbyEimCXAkCoCbAnxUOqwMBO1cmrrYYYHEKhpzF5cAuUje1m4rs9iZYFwFvn4/s320/2011018397jpg-05706d7e20ff67d2.jpg" /></a></div><div style="text-align: center;"><span style="font-size: x-small;"><span style="font-style: italic;">Picasso's "Portrait Of Dora Maar"</span></span></div><div style="text-align: center;"><span style="font-style: italic;"> </span></div>In order to accomplish this, I modified my program to accept two camera inputs, one for each perspective. I then added a keyboard interaction to allow the user to modify the banding resolution. The result is a program that can look at two perspectives, and divide them amongst image bands accordingly.<br />
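The core of the banding is simple: alternate fixed-width vertical bands between the two camera frames. A minimal sketch of the idea in plain Java (not the actual Processing code; frames are modeled as 2D pixel arrays):

```java
// Illustrative two-camera band interleaving.
public class Bander {
    // Even-numbered vertical bands come from camera A, odd bands from
    // camera B. 'bandWidth' is the user-adjustable banding resolution:
    // smaller bands blend the two perspectives more finely.
    public static int[][] interleave(int[][] a, int[][] b, int bandWidth) {
        int h = a.length, w = a[0].length;
        int[][] out = new int[h][w];
        for (int y = 0; y < h; y++)
            for (int x = 0; x < w; x++)
                out[y][x] = ((x / bandWidth) % 2 == 0) ? a[y][x] : b[y][x];
        return out;
    }
}
```

Note that both frames contribute exactly half the output pixels regardless of band width; only the distribution of those pixels changes.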
<br />
<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj3GjyowsSnWDr4OJfzpx1g_Gp9DzrxSYfkhzRfaqyioUGMLnQWDXWF_qN1IcLhH_L71y2g4tBlQawaX8CpFM1YRGEQ08fwcXJQ3n6l3791fO9x6BAWuL2a9VXCOATOTyQx54anDZ96_Z_g/s1600/Picture+1.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="300" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj3GjyowsSnWDr4OJfzpx1g_Gp9DzrxSYfkhzRfaqyioUGMLnQWDXWF_qN1IcLhH_L71y2g4tBlQawaX8CpFM1YRGEQ08fwcXJQ3n6l3791fO9x6BAWuL2a9VXCOATOTyQx54anDZ96_Z_g/s400/Picture+1.png" width="400" /></a></div> In order to achieve an effect similar to Picasso's, it's necessary that the user carefully align their camera angles so that the perspective in question is mutually centered in each camera. This will ensure the unity necessary in the merging of the 2D planes into one. In the image above, I've centered myself in the frame of both camera perspectives.<br />
<br />
<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiTc-ctMa-51wa9qJjjwIBbdqTV2TTdTrlqkjDRYSN1DS0qsMCTTiZOu_7ggsgvYrEda2ip5ql_u65TjN2JIKSj-z4OM_KVIpkGzIAWo2frT0QUww0s-KxRehyUQXQI1lXRAWZQ-lD1TwyR/s1600/Picture+2.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="298" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiTc-ctMa-51wa9qJjjwIBbdqTV2TTdTrlqkjDRYSN1DS0qsMCTTiZOu_7ggsgvYrEda2ip5ql_u65TjN2JIKSj-z4OM_KVIpkGzIAWo2frT0QUww0s-KxRehyUQXQI1lXRAWZQ-lD1TwyR/s400/Picture+2.png" width="400" /></a></div>As the user interacts with the banding resolution, a variety of effects occur. Here, we can see that at low banding resolutions, there are very obvious differentiations between the two images, resulting in recognizable pieces of each image, and a feeling of displacement for those pieces.<br />
<br />
<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiktPe67JDJ63GZb0vp7ZY0uGC2zaPflJmyoK4ox6Yhj0M57BmbFp711ioAj9SIuvwHO9jJ1wbg8kRmhnu-swhei0Habru448fFK2dCOakJfZLVfC67rhHWWLChD4tizQo6InigTbzPMZqf/s1600/Picture+7.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="297" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiktPe67JDJ63GZb0vp7ZY0uGC2zaPflJmyoK4ox6Yhj0M57BmbFp711ioAj9SIuvwHO9jJ1wbg8kRmhnu-swhei0Habru448fFK2dCOakJfZLVfC67rhHWWLChD4tizQo6InigTbzPMZqf/s400/Picture+7.png" width="400" /></a></div>As the banding resolution is increased, the feeling of displacement is reduced. Instead the image yields more of a feeling of simultaneous existence, with both images occupying the same space. This is due to the increased resolution revealing a more evenly distributed rendering of each perspective, despite using exactly the same number of pixels. At these higher resolutions, there is less necessity for the object or scene to be centered, as the increased clarity allows for both perspectives to be seen regardless of positioning. (This effect can be seen at the top of this entry)<br />
<br />
<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgUygbzSq5kg41xoYyhDR25QvC2ZrEJWKXKysibBhxs0XlTEzF26HKORNtA5DeRYd7NP4XqNtspoYu6nZcyEvP4pOo5HK8vZam8yWtsKeS7OCP-T68kWpPWFfUhcKFf6i9jJvTEBKK6WVAq/s1600/Picture+10.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="297" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgUygbzSq5kg41xoYyhDR25QvC2ZrEJWKXKysibBhxs0XlTEzF26HKORNtA5DeRYd7NP4XqNtspoYu6nZcyEvP4pOo5HK8vZam8yWtsKeS7OCP-T68kWpPWFfUhcKFf6i9jJvTEBKK6WVAq/s400/Picture+10.png" width="400" /></a></div>Because the image banding controls are distinct, they can also be used to combine the two effects. This can result in a striped pattern that allows for some of the best of both worlds. The increased resolution in one dimension increases clarity, while the lower resolution in the other dimension allows for the feeling of displacement, and the clear existence of two perspectives.<br />
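Combining the two distinct banding controls reduces to choosing a source camera from independent horizontal and vertical band indices. A hypothetical sketch of that selection rule (names invented for this example):

```java
// Illustrative 2D band selection with independent controls per axis.
public class Bander2D {
    // bandW and bandH are the two distinct banding controls. When they
    // differ, alternating on the sum of the band indices yields the
    // striped/checkered mix of the two perspectives.
    public static int source(int x, int y, int bandW, int bandH) {
        return (x / bandW + y / bandH) % 2;  // 0 = camera A, 1 = camera B
    }
}
```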
<br />
This two camera application of image banding and perspective is clearly in the early stages. Most notably, a fixed and aligned camera configuration might yield more consistent images and interaction. Additionally, a much larger number of cameras could be used, resulting in further displacement and perspective collisions. For example, four cameras aligned on an x-y axis could result in a 2D image that showed perspective on an object or scene from all sides. Alternatively, motorized camera mounts could allow the user to control perspective, and thus control the displacement in the 2D image they were creating.Patrickhttp://www.blogger.com/profile/03497220940285089914noreply@blogger.com0tag:blogger.com,1999:blog-5985145776529519536.post-67285109936396574512010-03-30T07:49:00.000-07:002010-03-30T07:49:51.770-07:00Nature Of Code: Final Project Proposal, Tetris and Conway's Game Of Life<div class="separator" style="clear: both; text-align: center;"></div><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiaMUvE9n2MVHO5auGSGg-uYsZuU-j2_Me-XIkpd66oBjzl6nj6FEgf73L0AhMVSJSK0JsP56mjNmxxzCXE27BUD_o5Ixg203c_U3MdiCegdI6fTpmnla526rNpkqXvuyPg3Pfc5PMj1ZeT/s1600/gol+small.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="400" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiaMUvE9n2MVHO5auGSGg-uYsZuU-j2_Me-XIkpd66oBjzl6nj6FEgf73L0AhMVSJSK0JsP56mjNmxxzCXE27BUD_o5Ixg203c_U3MdiCegdI6fTpmnla526rNpkqXvuyPg3Pfc5PMj1ZeT/s400/gol+small.jpg" width="300" /></a></div><br />
Our <a href="http://itp.nyu.edu/varwiki/Syllabus/Nature-of-Code-S10">Nature of Code</a> final project is about as abstract as you can get, with the option to leverage any or all of the phenomena we've covered. What's more, the visualization (or lack thereof) is completely open-ended. In short, we were asked to look back at the huge amount of material covered this semester and get inspired.<br />
<br />
For me, that meant childhood video games; specifically, <a href="http://en.wikipedia.org/wiki/Tetris">Tetris</a>. While reading <a href="http://www.ibiblio.org/lifepatterns/october1970.html">this article</a> on <a href="http://en.wikipedia.org/wiki/Conway%27s_Game_of_Life">Conway's Game Of Life</a>, I was struck by how similar the cellular shapes were to Tetris pieces. This got me wondering whether the two could be combined into a video game with novel gameplay driven by cellular automata.<br />
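For reference, the Game of Life rules that produce those Tetris-like shapes are tiny; a minimal sketch in plain Java (a generic implementation of the standard B3/S23 rules, not code from the proposal) looks like this:

```java
public class Life {
    // One generation of Conway's Game of Life on a non-wrapping grid.
    // A live cell survives with 2 or 3 live neighbors; a dead cell is
    // born with exactly 3 (the standard B3/S23 rules).
    static boolean[][] step(boolean[][] g) {
        int rows = g.length, cols = g[0].length;
        boolean[][] next = new boolean[rows][cols];
        for (int r = 0; r < rows; r++) {
            for (int c = 0; c < cols; c++) {
                int n = 0;
                for (int dr = -1; dr <= 1; dr++) {
                    for (int dc = -1; dc <= 1; dc++) {
                        if (dr == 0 && dc == 0) continue;
                        int rr = r + dr, cc = c + dc;
                        if (rr >= 0 && rr < rows && cc >= 0 && cc < cols && g[rr][cc]) n++;
                    }
                }
                next[r][c] = g[r][c] ? (n == 2 || n == 3) : (n == 3);
            }
        }
        return next;
    }
}
```

A vertical "I-piece" of three cells, for instance, flips to a horizontal one each generation (the classic blinker), which is exactly the kind of behavior that suggests Tetris-like game mechanics.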
<br />
While I'm still fleshing out the idea, you can check out my thoughts so far in the proposal below.<br />
<br />
<span style="font-size: x-small;"><i><b>ppt: <a href="http://www.patrickproctor.com/sounds/ITP/Tetris%20GOL%20Proposal.ppt">Nature of Code Final Project Proposal - Tetris and Conway's Game Of Life </a></b></i></span>Patrickhttp://www.blogger.com/profile/03497220940285089914noreply@blogger.com0tag:blogger.com,1999:blog-5985145776529519536.post-20960641440801254322010-03-23T21:52:00.000-07:002010-03-23T21:52:57.655-07:00Pixel By Pixel: Pixel Transformation<div class="separator" style="clear: both; text-align: center;"></div><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiX5El0rue5khCS2HIDdPXG95-kf3e20nhrmAiQZ2TmTTlAHX0J1qsvpyVCHvjQ4xyxTRYePcFQBkxg9B_El20aQHvzj6cGj23uamUeRj87tQqy2UZ670mVnVz6YOvdNA0eR71vIqnLNW2e/s1600-h/Untitled+10.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="301" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiX5El0rue5khCS2HIDdPXG95-kf3e20nhrmAiQZ2TmTTlAHX0J1qsvpyVCHvjQ4xyxTRYePcFQBkxg9B_El20aQHvzj6cGj23uamUeRj87tQqy2UZ670mVnVz6YOvdNA0eR71vIqnLNW2e/s400/Untitled+10.jpg" width="400" /></a></div><br />
This week in <a href="http://itp.nyu.edu/varwiki/Syllabus/Pixels-S10">Pixel By Pixel</a>, we were asked to delve into the world of pixel transformation: put simply, taking the individual pixels in an image and processing them to change their location in the grid. The goal was a result that was "dynamic and interactive".<br />
<br />
<b>Part 1: Reflected Pixels</b><br />
I began the exercise by dividing the image in half both horizontally and vertically, and reflecting one quadrant's pixels into the remaining three quadrants. The result was a program that is both entertaining and visually dynamic. As can be seen from the images below, it allows for imagery (particularly with facial anatomy) that instinctively feels deformed or distorted. However, it can feel pretty simplistic, along the lines of a house of mirrors.<br />
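One possible implementation of that reflection, sketched in plain Java rather than the original Processing sketch (names hypothetical; this version mirrors the upper-left quadrant):

```java
public class Reflect {
    // Mirror one quadrant of an image (here the upper-left) into the
    // other three quadrants, producing a four-way kaleidoscope effect.
    // Pixels are packed row-major: index = y * w + x.
    static int[] reflect(int[] px, int w, int h) {
        int[] out = new int[w * h];
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                int sx = (x < w / 2) ? x : w - 1 - x;   // mirror the right half
                int sy = (y < h / 2) ? y : h - 1 - y;   // mirror the bottom half
                out[y * w + x] = px[sy * w + sx];
            }
        }
        return out;
    }
}
```

Every output pixel outside the source quadrant is just a mirrored lookup, which is why the effect reads as a house of mirrors.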
<br />
<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhGfhDBinbU3ve9RMrCSvIyA6Lkx5t0Wj7HACfc0yDx_Cjq9_O1zNHuBhzEV7XZcRIwNvqHpxv0X4-VdqDVPbvEyeWkroeDpRHig9srR8I3Qagm2bL7YV4c4Rik_O7e8JOx16BkUEK1_GJZ/s1600-h/Untitled+00.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="300" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhGfhDBinbU3ve9RMrCSvIyA6Lkx5t0Wj7HACfc0yDx_Cjq9_O1zNHuBhzEV7XZcRIwNvqHpxv0X4-VdqDVPbvEyeWkroeDpRHig9srR8I3Qagm2bL7YV4c4Rik_O7e8JOx16BkUEK1_GJZ/s400/Untitled+00.jpg" width="400" /></a></div><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhED7uKKTSYwqTSR_VV7VHIozb2XFZEHtRq5NxYIzPTzWEnEe9w1-clddImoCnCvZnE1s_PuB9PyzZFM37x55HVmKlAbNLfEagjhPazGpAv4mc8XuZl01Gs8Az8iX3OYzKc6SmS2zrx4VjN/s1600-h/Untitled.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="301" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhED7uKKTSYwqTSR_VV7VHIozb2XFZEHtRq5NxYIzPTzWEnEe9w1-clddImoCnCvZnE1s_PuB9PyzZFM37x55HVmKlAbNLfEagjhPazGpAv4mc8XuZl01Gs8Az8iX3OYzKc6SmS2zrx4VjN/s400/Untitled.jpg" width="400" /></a><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgKc1r-5mXQVAh7kujCzIZUJE5ToZkgTvBequceqeAMe3WMVlBV9JawcXtrGi0ZT9VYUsyZ7uuuagkL5YVQOwoZfpwrgRP0DUrHhdpMA2IAEX99_Cz0ppuJ4oIPIOQEKphRotXV3uJPDZHF/s1600-h/Untitled+2.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="300" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgKc1r-5mXQVAh7kujCzIZUJE5ToZkgTvBequceqeAMe3WMVlBV9JawcXtrGi0ZT9VYUsyZ7uuuagkL5YVQOwoZfpwrgRP0DUrHhdpMA2IAEX99_Cz0ppuJ4oIPIOQEKphRotXV3uJPDZHF/s400/Untitled+2.jpg" width="400" /></a></div><div class="separator" style="clear: both; text-align: center;"><a 
href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjEYozj0OEGH2REVw8VZSkhxxWdaVjvNTuUmRDp4ehbjqbIc9kluvVK-DNTeCrIGl-hqiU1nY1V0hhKhn_sbjv0TJsGySuZX94mEZLZsRPGgqOdekrdE40JvANa2xrIpq0fCHOuFkJ0xZ5n/s1600-h/Untitled+5.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="297" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjEYozj0OEGH2REVw8VZSkhxxWdaVjvNTuUmRDp4ehbjqbIc9kluvVK-DNTeCrIGl-hqiU1nY1V0hhKhn_sbjv0TJsGySuZX94mEZLZsRPGgqOdekrdE40JvANa2xrIpq0fCHOuFkJ0xZ5n/s400/Untitled+5.jpg" width="400" /></a></div><b><br />
</b><br />
<b>Part 2: Banding</b><br />
While experimenting with the reflection algorithm, I modified the factor by which the horizontal reflector was divided, which resulted in the banding pattern seen above. Observing the pattern made me think that intentional banding could create compelling imagery based around repetition, patterns, and blending.<br />
<br />
<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg19NeRYN7KoMY_D3IMwl6zWMiEcan2DCn0yrBH7uZIoEtnsxBLAqfLaj7kB6oTJt0-_TFVE_FXJAqPZ7ZOeXD11HhB7MIP0jilLYP92zgFQsOv9pTHDromAcfOXtEo6alsfCt6M3l0XB4f/s1600-h/Untitled+6.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="300" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg19NeRYN7KoMY_D3IMwl6zWMiEcan2DCn0yrBH7uZIoEtnsxBLAqfLaj7kB6oTJt0-_TFVE_FXJAqPZ7ZOeXD11HhB7MIP0jilLYP92zgFQsOv9pTHDromAcfOXtEo6alsfCt6M3l0XB4f/s400/Untitled+6.jpg" width="400" /></a></div><br />
<b>Part 3: Banding Grids</b><br />
I decided to pursue the banding algorithm in both the vertical and horizontal dimensions. I achieved the desired result by grabbing a section of pixels and then reapplying it across the frame using modulo math. Since a modulo expression wraps back to zero every time its divisor is reached, the algorithm starts anew at each band boundary and redraws the desired band. The result is a "banding" grid of the selected area, as can be seen above.<br />
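The modulo tiling described above can be sketched in plain Java (a hypothetical reconstruction, not the original sketch's code): a band of `bw` x `bh` pixels grabbed at `(bx, by)` is repeated across the whole frame.

```java
public class BandGrid {
    // Tile the whole frame with one grabbed band of pixels.
    // Each destination pixel (x, y) is sourced from the band at
    // (bx + x % bw, by + y % bh); the modulo wraps at every band
    // boundary, redrawing the band across the grid.
    // Assumes the band fits in the image: bx + bw <= w and by + bh <= h.
    static int[] tile(int[] px, int w, int h, int bx, int by, int bw, int bh) {
        int[] out = new int[w * h];
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                out[y * w + x] = px[(by + y % bh) * w + (bx + x % bw)];
            }
        }
        return out;
    }
}
```

Shrinking `bw` and `bh` raises the "resolution" of the grid, which is exactly the knob explored in the later parts of this post.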
<br />
<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgqlWRcTyG_1CSkuQa1u7YHP25IIM3nx4MlKaoKx-rXDqPZKaQDQAyaRJfACuFbIc56KVwjvptrUPMzSqRYbhB5NcZOl1NXWHzjt7fsWMkd1onclcP11GO6QdHtyrp_33xlvvIzlgsWN_bO/s1600-h/Untitled+7.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="298" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgqlWRcTyG_1CSkuQa1u7YHP25IIM3nx4MlKaoKx-rXDqPZKaQDQAyaRJfACuFbIc56KVwjvptrUPMzSqRYbhB5NcZOl1NXWHzjt7fsWMkd1onclcP11GO6QdHtyrp_33xlvvIzlgsWN_bO/s400/Untitled+7.jpg" width="400" /></a><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj3eVrvcBtm0HtBYv9daF99TCs7n70zlUZ6enrK_02JdyeVn1kCNljad_hyJ4gluuixgmRVSNty4l0NUcyNnaNpqMo4JOy5Kv4C0LYuS0tdk9TLYID3dzcOal3Jr62TnYtc8L3UQlx9p2oQ/s1600-h/Untitled+8.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="300" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj3eVrvcBtm0HtBYv9daF99TCs7n70zlUZ6enrK_02JdyeVn1kCNljad_hyJ4gluuixgmRVSNty4l0NUcyNnaNpqMo4JOy5Kv4C0LYuS0tdk9TLYID3dzcOal3Jr62TnYtc8L3UQlx9p2oQ/s400/Untitled+8.jpg" width="400" /></a></div><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjlQCE-cBUj8MOMVTRJLatds35dY0SSQiHuHWthTQm5AnIUx1HRieWmLa176zUITLYqAeiwq3nO-CvdJ04c_4I06o1NMvYDY-unenLO1NUTFo7VOnHBi30rcrMPNo6aZZcq9S_oVlKtGXRb/s1600-h/Untitled+9.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="300" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjlQCE-cBUj8MOMVTRJLatds35dY0SSQiHuHWthTQm5AnIUx1HRieWmLa176zUITLYqAeiwq3nO-CvdJ04c_4I06o1NMvYDY-unenLO1NUTFo7VOnHBi30rcrMPNo6aZZcq9S_oVlKtGXRb/s400/Untitled+9.jpg" width="400" /></a></div><b><br />
</b><br />
<b>Part 4: Abstract Banding Grids And Resolution</b><br />
While a low resolution banding grid can be more compelling in terms of subject recognition, a higher resolution grid can actually yield more unusual and abstract patterns, particularly when it comes to subject blending and the perceived appearance of the subject.<br />
<br />
<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjNcDQZYXBlGMFOX1nn-O6Pb5m0OfWzD9Fy8utVX5BzHEV1huNJBdETlBHixb8r-oy_b9vu8SA2VgfcJcUsP42KV5UYSkmBW3spmFIpuwDvURjdtXfv6ikQgMaLlzdLnEwImKiDLYfkhU9Q/s1600-h/Untitled+11.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="297" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjNcDQZYXBlGMFOX1nn-O6Pb5m0OfWzD9Fy8utVX5BzHEV1huNJBdETlBHixb8r-oy_b9vu8SA2VgfcJcUsP42KV5UYSkmBW3spmFIpuwDvURjdtXfv6ikQgMaLlzdLnEwImKiDLYfkhU9Q/s400/Untitled+11.jpg" width="400" /></a><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEixfzquVCDmRxlYMTcNXL8fDgav0rhJ0QzVURRMQbJmwY5YSkNq_iR-IIJb0f-iEfP7ZNlehHA40dljGd6BP2UEAPBnJrTEkfO4-Ofgjc_tjvzc7omoP5_40aGdDDkFLJ0deKls7yuand7n/s1600-h/Untitled+12.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="297" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEixfzquVCDmRxlYMTcNXL8fDgav0rhJ0QzVURRMQbJmwY5YSkNq_iR-IIJb0f-iEfP7ZNlehHA40dljGd6BP2UEAPBnJrTEkfO4-Ofgjc_tjvzc7omoP5_40aGdDDkFLJ0deKls7yuand7n/s400/Untitled+12.jpg" width="400" /></a><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgxfEQCjcvcoS9ll_DOK7Mbfx0unqT5_feRexbAV1N2AxCWHtPxS2pK8YF8xFkjcX-S3aZJO06OFI9oH0gKIfIWoXv8KAxFuAhzKeiv0wDoW8_K2oJ5djCPD6oRfPrJSfzX3qWZVRMEmnAG/s1600-h/Untitled+15.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="298" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgxfEQCjcvcoS9ll_DOK7Mbfx0unqT5_feRexbAV1N2AxCWHtPxS2pK8YF8xFkjcX-S3aZJO06OFI9oH0gKIfIWoXv8KAxFuAhzKeiv0wDoW8_K2oJ5djCPD6oRfPrJSfzX3qWZVRMEmnAG/s400/Untitled+15.jpg" width="400" /></a><a 
href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgmSzvEvPj9Ttbh-8UuXHJ88xQthgAuk-z9kQNnrOSEVsC83CiHnSKIfmctba4vqiHdAaOO9sLe1c2btY9BZTnmHzWu5gd3CvyqkvnYylvtKKG-50K13X7nWaaGg8ZvNSDuSplub6HKj90M/s1600-h/Untitled+13.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="301" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgmSzvEvPj9Ttbh-8UuXHJ88xQthgAuk-z9kQNnrOSEVsC83CiHnSKIfmctba4vqiHdAaOO9sLe1c2btY9BZTnmHzWu5gd3CvyqkvnYylvtKKG-50K13X7nWaaGg8ZvNSDuSplub6HKj90M/s400/Untitled+13.jpg" width="400" /></a></div><span id="goog_1269404501613"></span><span id="goog_1269404501614"></span><br />
<b>Part 5: Subject Recognition In High Resolution Banding Grids</b><br />
Perhaps the most counterintuitive thing about the high resolution grids is that despite their complexity and abstraction (see above), they are simply a grid of one repeated image. This means that if care is taken, the subject can still be recognized, as the legible lettering in the image below shows.<br />
<br />
<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi49PL4QS4tTNCMVXHqylTGrQE9lPPksLLgkZWrt7zI2Nz3C7PVB05VxvQgLyBXSVNZ6bObxz9pcqKr2VUvBG7fhWxN_GQ8MKR9fDq5hFMyKeRWGKQ_fcidoXz47VyUxDRJyrZE9KV8sJh8/s1600-h/Untitled+14.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="298" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi49PL4QS4tTNCMVXHqylTGrQE9lPPksLLgkZWrt7zI2Nz3C7PVB05VxvQgLyBXSVNZ6bObxz9pcqKr2VUvBG7fhWxN_GQ8MKR9fDq5hFMyKeRWGKQ_fcidoXz47VyUxDRJyrZE9KV8sJh8/s400/Untitled+14.jpg" width="400" /></a></div><br />
The idea of banding grids manages to take a variable chunk of an existing screen, and repurpose it as a tool to build patterns that can be simultaneously abstracted and recognizable. What's more, it takes a group of pixels and repurposes them on a macro level such that the group itself becomes the implementation of the pixel. Further developments in this area might include using the mouse to move the selected band, implementing a constant-movement band that iterated across the image, or intelligently selecting bands of the implemented resolution to recreate the image itself, but in recursive bands that had in fact been sourced by the image.Patrickhttp://www.blogger.com/profile/03497220940285089914noreply@blogger.com0tag:blogger.com,1999:blog-5985145776529519536.post-72520341557077262172010-03-04T09:06:00.000-08:002010-03-04T09:06:12.373-08:00Sound And The City: Orchestra Seating Mk. 2<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi6o4FoP_Yx4wE1QscYSweOzs1bhjtO53WphzHMrNZHR-z1kTl3zo9W1AWSB-hz0CA-KN2KPzxV8La99ocML0BQamEfwyr2-bGP9JWqRKhZz4aFaiv_VvVVfCR01alsGM5yTrTA0MGR8EW5/s1600-h/Slide1.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="300" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi6o4FoP_Yx4wE1QscYSweOzs1bhjtO53WphzHMrNZHR-z1kTl3zo9W1AWSB-hz0CA-KN2KPzxV8La99ocML0BQamEfwyr2-bGP9JWqRKhZz4aFaiv_VvVVfCR01alsGM5yTrTA0MGR8EW5/s400/Slide1.jpg" width="400" /></a></div>This week in <a href="http://www.soundcity.danielperlin.net/">Sound And The City</a> finds us re-presenting our final project proposals, this time with the addition of an exterior critique. As such, I've modified my initial proposal to include a specific site, and a more specific implementation plan. I've also implemented a rudimentary demo in Logic Pro to simulate the effect of the installation.<br />
<br />
<span style="font-size: x-small;"><i><b>ppt: <a href="http://www.patrickproctor.com/sounds/ITP/orchestraSeating%20Proposal%20midterm.ppt">orchestraSeating mk. 2</a></b></i></span>Patrickhttp://www.blogger.com/profile/03497220940285089914noreply@blogger.com0tag:blogger.com,1999:blog-5985145776529519536.post-15611834126253479832010-03-03T07:15:00.000-08:002010-03-03T07:16:56.113-08:00Pixel By Pixel: Experiments In Thermal Imaging<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjyz4PZtQbFwnIVYng5p0u3Fqz54_PVFnF0QdJlTUe2_5Ho_dBeycspJgMe0zNZRCbOnJk74T_qfDInlhEvSf-UzR1cD4Z8GuD9VJUI-VkndUGJKlOosY7MkDCDzGGAATkAmohGEeR2OJKH/s1600-h/myframe298.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="297" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjyz4PZtQbFwnIVYng5p0u3Fqz54_PVFnF0QdJlTUe2_5Ho_dBeycspJgMe0zNZRCbOnJk74T_qfDInlhEvSf-UzR1cD4Z8GuD9VJUI-VkndUGJKlOosY7MkDCDzGGAATkAmohGEeR2OJKH/s400/myframe298.jpg" width="400" /></a></div><br />
This week in <a href="http://itp.nyu.edu/varwiki/Syllabus/Pixels-S10">Pixel By Pixel</a> we were asked to embark on a project inspired by light phenomena observed on our recent trip to the <a href="http://www.nysci.org/">New York Hall of Science</a>. As such, I decided to take the museum's thermal camera a step further, and actually create a thermal pixel grid, seen above. The grid (in theory) would use <a href="http://en.wikipedia.org/wiki/Thermoelectric_effect">peltier junctions</a> behind an insulated grid of copper tiles, which one could then heat individually using a microcontroller.<br />
<br />
<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjMOXno-yeQ4wSVLG4d4LoB-kIRvz_1uMOuLmgGee6jmXQNInyzum-GbAFHqA0pwdQ7b_gxMyKLCWNU0vodQdmypUFWDrldnA25hhxS0XQ1Pft7Sqs6bIs6OnSv07o9eXgVtLhVdHbkgCQa/s1600-h/IMG_0005.JPG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="300" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjMOXno-yeQ4wSVLG4d4LoB-kIRvz_1uMOuLmgGee6jmXQNInyzum-GbAFHqA0pwdQ7b_gxMyKLCWNU0vodQdmypUFWDrldnA25hhxS0XQ1Pft7Sqs6bIs6OnSv07o9eXgVtLhVdHbkgCQa/s400/IMG_0005.JPG" width="400" /></a></div><div style="text-align: center;"><i>The paper "canvas" frame</i></div><div style="text-align: center;"><br />
</div>Unfortunately, the project had challenges from the outset. The first was in the initial concept itself: once I started testing the copper tiles with the peltier junctions, I discovered that something about the copper, probably its reflectivity (polished metals have low thermal emissivity, so they reflect their surroundings rather than show their own heat), made the tiles invisible to the thermal camera. Aluminum behaved the same way. Paper, however, worked well, so paper became the new canvas material.<br />
<br />
<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjepKXsnhqOstqkrt21CVR-knZha7n1vWRtQpRTLUJALJO8YwpP4iRg5Qt1PdPnuEJ_-6lqeNw9HhkAR0bBrUk1NmEKCoL_oHaImTUE1oDdmeJdOPA3tEhhpOSPhOphes4F3bsKx7RZrJre/s1600-h/IMG_0003.JPG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="300" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjepKXsnhqOstqkrt21CVR-knZha7n1vWRtQpRTLUJALJO8YwpP4iRg5Qt1PdPnuEJ_-6lqeNw9HhkAR0bBrUk1NmEKCoL_oHaImTUE1oDdmeJdOPA3tEhhpOSPhOphes4F3bsKx7RZrJre/s400/IMG_0003.JPG" width="400" /></a></div><div class="separator" style="clear: both; text-align: center;"></div><div style="text-align: center;"> <i>The four transistors of the circuit</i></div><div style="text-align: center;"><br />
</div>I created a frame for the paper canvas, and then built the circuit required to power the four peltier junctions: four transistors switching an external power source. The external source was needed to heat the junctions adequately, and the transistors let the microcontroller route that power to each junction individually.<br />
<br />
<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiY_Pol8czb05uGMUUdHkKC9iml2nzffhzFs2bj4yIWQYypJLw85ebwTM-t50SAQMNiRLpsI5_KUMAKcAOg59fYzSppugBwsYIMp79_txzBokX631chOgC17pXyJqlWiOvKRx-MwJk-GR28/s1600-h/IMG_0002.JPG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="300" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiY_Pol8czb05uGMUUdHkKC9iml2nzffhzFs2bj4yIWQYypJLw85ebwTM-t50SAQMNiRLpsI5_KUMAKcAOg59fYzSppugBwsYIMp79_txzBokX631chOgC17pXyJqlWiOvKRx-MwJk-GR28/s400/IMG_0002.JPG" width="400" /></a></div><div style="text-align: center;"><i>The fully wired canvas/box</i></div><div style="text-align: center;"><br />
</div>Once the circuit was complete, I mounted it (and the four peltier junctions) to the back of the canvas/box, thus enabling the entire unit to stand largely on its own, with the only outgoing connections being to a power source and the microcontroller.<br />
<br />
<div style="text-align: center;"><object height="300" width="400"><param name="allowfullscreen" value="true" /><param name="allowscriptaccess" value="always" /><param name="movie" value="http://vimeo.com/moogaloop.swf?clip_id=9872046&server=vimeo.com&show_title=1&show_byline=1&show_portrait=0&color=&fullscreen=1" /><embed src="http://vimeo.com/moogaloop.swf?clip_id=9872046&server=vimeo.com&show_title=1&show_byline=1&show_portrait=0&color=&fullscreen=1" type="application/x-shockwave-flash" allowfullscreen="true" allowscriptaccess="always" width="400" height="300"></embed></object></div><br />
Once this was complete, I began testing the unit with test patterns, and the true problem arose: the peltier junctions function smoothly for 20-30 seconds, but as they retain heat they lose their ability to turn "off" as pixels, and the display simply becomes a grid of pixels stuck in the "on" position. On top of that, the thermal camera itself produces recurring (and irritating) scan lines. Both effects can be seen (along with ITP in-lab antics) in the video grab from the thermal camera above.<br />
<br />
While the end result was ultimately a disappointment, the endeavor was not. The idea still seems feasible, even if peltier junctions are maybe not the appropriate solution. What's more, before the pixels fail, one gets a general conception of the idea trying to be achieved, and it's actually quite visually pleasing. What's more, I feel that the addition of a color camera could further the project to an even greater extent. Put simply: there are still many aspects to explore.Patrickhttp://www.blogger.com/profile/03497220940285089914noreply@blogger.com0tag:blogger.com,1999:blog-5985145776529519536.post-60370262239222457522010-02-25T17:36:00.000-08:002010-02-25T17:36:47.022-08:00Little Computers: Fun With Drawing<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhh83z3VsBV9EMGwRMg_QCuhc-jsiPu0qTysYFSXwoD-8Ub8ynH2GvMUdg0kImI9GnwJIp4Uw5fPT6jDA3FX-1Gk_ka6iWn2IGoBb2BZwcv6V-WDV-vrRtTXZodGJmPgTMT8tObUUBKqchyphenhyphen/s1600-h/Picture+1.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhh83z3VsBV9EMGwRMg_QCuhc-jsiPu0qTysYFSXwoD-8Ub8ynH2GvMUdg0kImI9GnwJIp4Uw5fPT6jDA3FX-1Gk_ka6iWn2IGoBb2BZwcv6V-WDV-vrRtTXZodGJmPgTMT8tObUUBKqchyphenhyphen/s320/Picture+1.png" /></a></div><div class="separator" style="clear: both; text-align: center;"><br />
</div><div class="separator" style="clear: both; text-align: left;">This week for <a href="http://littlecomputers.net/2010/">Little Computers</a>, we were asked to use iPhone drawing techniques to create an app that utilized them in an interesting or animated way. Given the weather, I decided to use precipitation for inspiration, and made an app based around rain. </div><div class="separator" style="clear: both; text-align: left;"><br />
</div><div class="separator" style="clear: both; text-align: left;">The app starts out (as above) with some clouds and a low lying body of water. As you shake your iPhone, the accelerometer detects the movement, and the app generates water drops in response. As the drops reach the body of water, it rises accordingly.</div><div class="separator" style="clear: both; text-align: left;"><br />
</div><div class="separator" style="clear: both; text-align: left;">Once the body of water has risen to the edge of the clouds, it detects this, and holds off creating rain so it can have a moment to recede. After that, you can start all over again!</div><div class="separator" style="clear: both; text-align: left;"><br />
</div><div class="separator" style="clear: both; text-align: left;">The shapes in the application were rendered in Quartz, the 2D rendering engine native to Mac OS.</div><div class="separator" style="clear: both; text-align: left;"><br />
</div><div class="separator" style="clear: both; text-align: left;"><b><i><span class="Apple-style-span" style="font-size: small;">github: </span></i></b><a href="http://github.com/pproctor-itp/FunWithDrawing"><b><i><span class="Apple-style-span" style="font-size: small;">FunWithDrawing</span></i></b></a></div>Patrickhttp://www.blogger.com/profile/03497220940285089914noreply@blogger.com0tag:blogger.com,1999:blog-5985145776529519536.post-77727292488650725562010-02-25T08:58:00.000-08:002010-02-25T08:58:56.678-08:00Sound And The City: New Japanese Underground<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhF5CAI7spVNMEUgds6zh3zyk2q4A6WK627OxqSN8sP4zM1SBRVAQnW2aS61-h0JwO7q0hcr5lfNLkcHfvWob5ZiN__UHw-tUr2oe64foJ_0792b7eYopBjjSFrHe69xJcNpgD90CJPs3mf/s1600-h/opener.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhF5CAI7spVNMEUgds6zh3zyk2q4A6WK627OxqSN8sP4zM1SBRVAQnW2aS61-h0JwO7q0hcr5lfNLkcHfvWob5ZiN__UHw-tUr2oe64foJ_0792b7eYopBjjSFrHe69xJcNpgD90CJPs3mf/s320/opener.jpg" /></a></div>This week for <a href="http://www.soundcity.danielperlin.net/">Sound and the City</a>, <a href="http://itp.nyu.edu/%7Emt1597/anaesthetic/">Mark Triant</a>, <a href="http://materials.nassima.com/">Igal Nassima</a>, and I were asked to present on some aspect of sonic history. We collectively decided upon Japanese noise music, sometimes known as the "New Japanese Underground". Below is our presentation (complete with graphics and sound) to the class. Enjoy!<br />
<br />
<span style="font-size: x-small;"><i><b>pdf: <a href="http://www.patrickproctor.com/sounds/ITP/NJU_Slides.pdf">New Japanese Underground Presentation</a></b></i></span>Patrickhttp://www.blogger.com/profile/03497220940285089914noreply@blogger.com0tag:blogger.com,1999:blog-5985145776529519536.post-35802620892979563842010-02-22T14:36:00.000-08:002010-02-22T14:37:04.193-08:00Nature Of Code: Midterm Proposal<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjE9Xo0mrNIomPjoBPZVM2iphb86cAR7Yf83UxUbKSSrDzaJx7Km9O8tCeT1VlFplHkPFuP_pUrO7uVlIYbvQWrtllZ9TJzlofR5aJpBTrDPKVm01W3UCST8OvXZC_wM-6plUVP0ENpnL8Y/s1600-h/Picture+1.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjE9Xo0mrNIomPjoBPZVM2iphb86cAR7Yf83UxUbKSSrDzaJx7Km9O8tCeT1VlFplHkPFuP_pUrO7uVlIYbvQWrtllZ9TJzlofR5aJpBTrDPKVm01W3UCST8OvXZC_wM-6plUVP0ENpnL8Y/s320/Picture+1.png" /></a></div>For the <a href="http://itp.nyu.edu/varwiki/Syllabus/Nature-of-Code-S10">Nature of Code</a> midterm, we were asked to do the following: <i> </i><br />
<br />
<span style="font-size: x-small;"><i>Develop a proposal and a prototype for a "midterm" project. The scope of the project can be quite large (trial idea for a final, for example), however, you will not be expected to implement the entire project. For the proposal, include a description, relevant links, and a quick Processing sketch of the first step towards the idea. Link your proposal from the wiki. Next week, we will look at a selection of proposals and then on Mar 2/3, the results for all midterm projects will be presented.</i></span><br />
<br />
As such, I've decided to continue <a href="http://iamjacksgraduateeducation.blogspot.com/2010/02/nature-of-code-popcorn-modeling.html">my work on modeling popcorn</a> using the Box2D library. While the original attempt was moderately successful, there are a number of aspects that I'd like to modify and enhance so that the simulation will be more realistic. Specifically, they are to:<br />
<ul><li>Enhance the geometry of the popcorn so that the kernels and popped corns are not uniform in shape and size.</li>
<li>Work with density variables to create more realistic behavior as the corn pops and expands out of the kettle.</li>
<li> Work with kernel placement and velocity so that the kernels don't breach the kettle walls upon popping.</li>
<li>Allow for user interaction and variability with the simulation, including the amount of popcorn, turning the kettle on and off, and more.</li>
<li>Create more realistic visualizations to associate with the simulation as a whole.</li>
</ul>These changes will also serve to further familiarize me with the Box2d library, and its ramifications for geometry and particle systems in a modeling context. The project will not be the prototype for a final project, but rather the final iteration of an earlier project before I embark on a final.Patrickhttp://www.blogger.com/profile/03497220940285089914noreply@blogger.com0tag:blogger.com,1999:blog-5985145776529519536.post-72134117735039954222010-02-16T07:59:00.000-08:002010-02-16T08:01:23.834-08:00Nature Of Code: Popcorn Modeling<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiEU7mE0DZi9k3Icv0bFi7Sb6nAG_QCdAhI9KyJFh-eiJxx-ADVwpG_S0YlTCBk7YQaYV7YULTGkg0FCx9tEKwdqIFUhDqhzoFw_p4jXPiu6Rjgy2yzlucFWmCJ3w1TslyQKBZfFYo-Hk4l/s1600-h/Picture+1.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiEU7mE0DZi9k3Icv0bFi7Sb6nAG_QCdAhI9KyJFh-eiJxx-ADVwpG_S0YlTCBk7YQaYV7YULTGkg0FCx9tEKwdqIFUhDqhzoFw_p4jXPiu6Rjgy2yzlucFWmCJ3w1TslyQKBZfFYo-Hk4l/s320/Picture+1.png" /></a></div><i>This week in <a href="http://itp.nyu.edu/varwiki/Syllabus/Nature-of-Code-S10">Nature Of Code</a>, we were asked to use the <a href="http://www.box2d.org/">Box2D</a> library to implement a model of some real world phenomena. Box2D is a physics system that manages the physics of a given system, which you can add various physical "bodies" to. This allows you to create a realistic physics model for a given 2D system with very little coding overhead. For whatever reason, when I heard this, my mind immediately went to popcorn.</i><br />
<br />
<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgDQa3h0b8W819w3CdR0H8oXcbnsl68KrEfsSdqkcs3mOwGuektXU47Ce9HdpA4jfVrPtKwUvwVpwrwo1xtTm3733ev2ogvEn2C54-tYgSmwxbcqMTBxZfS8IBMv86RJ1qMLxYzYB10879S/s1600-h/Picture+2.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgDQa3h0b8W819w3CdR0H8oXcbnsl68KrEfsSdqkcs3mOwGuektXU47Ce9HdpA4jfVrPtKwUvwVpwrwo1xtTm3733ev2ogvEn2C54-tYgSmwxbcqMTBxZfS8IBMv86RJ1qMLxYzYB10879S/s320/Picture+2.png" /></a></div><i>For a smaller number (100) of kernels, my model (with a small amount of tweaking) actually works quite well. The kernels pop somewhat naturally, and the system generally handles the physics of the situation how you would expect it might: just like a real popcorn popper! You can see the effect of 100 kernels in the first two illustrations of this post.</i><br />
<br />
<br />
<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh9s78b4V5KJar9bAj3mNJfnEnm5-ig0CgdEWFrQQPGAfsTPUpHd1I0qqwg9nCsE58ue1mCPR1vuUhidp2C1gauRYlS5a25J6xYG4AURfBdWD5iaAPNA1S9YP8zm5Ty3fbdrA83hBmf9uif/s1600-h/Picture+3.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh9s78b4V5KJar9bAj3mNJfnEnm5-ig0CgdEWFrQQPGAfsTPUpHd1I0qqwg9nCsE58ue1mCPR1vuUhidp2C1gauRYlS5a25J6xYG4AURfBdWD5iaAPNA1S9YP8zm5Ty3fbdrA83hBmf9uif/s320/Picture+3.png" /></a></div><i>However, from there I started adding more kernels, and got into some trouble. At 150 kernels (above) the physics started behaving a bit erratically, and areas with a lot of popping density would cause kernels to move through solid objects.</i><br />
<br />
<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhKj4WWJxbTxbDPPVeYxabWMwitDxpv2ONKlLqP3pEkuHm-l74NoxfhF65xoSDQ8hu0ZJ6bNwqBCNll9M-Lj_OTiIiv8xPGkHIfKaFwb5SESb7ch2a7m4ZJjZZegXKM5Z2CSuUgV_Ai4pDG/s1600-h/Picture+4.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhKj4WWJxbTxbDPPVeYxabWMwitDxpv2ONKlLqP3pEkuHm-l74NoxfhF65xoSDQ8hu0ZJ6bNwqBCNll9M-Lj_OTiIiv8xPGkHIfKaFwb5SESb7ch2a7m4ZJjZZegXKM5Z2CSuUgV_Ai4pDG/s320/Picture+4.png" /></a></div><i>When I increased the number of kernels even further, to 300, this behavior became almost ubiquitous in the system. Kernels were popping out of the kettle left and right, and the walls of the kettle seemed almost meaningless.</i><br />
<br />
<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiogFjRpCOtXLdnSLlTkNy31q3hKan5pDQ9fA2KAH9umwBInERXTgGhK-Skt0SvcpxWyj56CWuEn3byqxWEnIEtdRktxeVEWLGMEVO7EB9736eC7fygojt4U_L2DMsz5XB5kWtXW1-pweec/s1600-h/Picture+5.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiogFjRpCOtXLdnSLlTkNy31q3hKan5pDQ9fA2KAH9umwBInERXTgGhK-Skt0SvcpxWyj56CWuEn3byqxWEnIEtdRktxeVEWLGMEVO7EB9736eC7fygojt4U_L2DMsz5XB5kWtXW1-pweec/s320/Picture+5.png" /></a></div><i>I have yet to discover the cause of this behavior, but my guess is that it's an inability of Box2D to handle the sudden transition from kernel to popped kernel. One idea I've had is stretching the "pop" across several steps of the physics engine, say three or four, rather than a single step. This might allow the engine to handle the change more gracefully.</i><br />
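That idea can be sketched quickly. This is a hypothetical Python stand-in, not the actual Processing/Box2D code: instead of jumping the kernel's radius in one physics step, interpolate it over several steps so the solver sees a gradual change (the radii and step count here are invented).

```python
def pop_radii(start, end, steps):
    """Radii for a kernel 'popping' over several physics steps,
    instead of jumping from start to end in a single step."""
    if steps < 1:
        raise ValueError("steps must be >= 1")
    return [start + (end - start) * (i + 1) / steps for i in range(steps)]

# Expanding a kernel from radius 2 to radius 6 over 4 steps; each step,
# the body's fixture would be rebuilt at the next radius in the list.
print(pop_radii(2.0, 6.0, 4))  # [3.0, 4.0, 5.0, 6.0]
```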
<i><br />
</i><br />
<i>One of the challenges of Box2D is that it has no graphical output, so everything you draw is based upon a Box2D object, but isn't actually a Box2D object. What this means is that errors in one's graphics code can appear to be Box2D errors, when in fact they are coding errors on the user's end. This is unquestionably one of the downsides of using a "black box" engine, but hardly enough of a problem to avoid using such a versatile tool.</i><br />
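The usual pattern, shown here as a generic Python sketch rather than the project's actual code, is that the physics body is the single source of truth and the drawing object only copies from it each frame; if a sprite drifts away from its body, the bug lives in that copy step, not in the engine.

```python
class Body:
    """Stand-in for a physics-engine body: owns position and angle."""
    def __init__(self, x, y, angle=0.0):
        self.x, self.y, self.angle = x, y, angle

class Sprite:
    """Graphics-side object; holds no physics state of its own."""
    def __init__(self, body):
        self.body = body
        self.x = self.y = self.angle = None

    def sync(self):
        # Copy the body's transform every frame before drawing.
        self.x, self.y, self.angle = self.body.x, self.body.y, self.body.angle

body = Body(10, 20, 1.5)
sprite = Sprite(body)
sprite.sync()
print(sprite.x, sprite.y, sprite.angle)  # 10 20 1.5
```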
<span style="font-size: x-small;"><i><b><br />
</b></i></span><br />
<span style="font-size: x-small;"><i><b>source: <a href="http://www.patrickproctor.com/sounds/ITP/popcornmodel.zip">Popcorn Model</a></b></i></span>Patrickhttp://www.blogger.com/profile/03497220940285089914noreply@blogger.com0tag:blogger.com,1999:blog-5985145776529519536.post-51706916928110392282010-02-11T09:51:00.000-08:002010-02-11T13:33:26.664-08:00Sound And The City: Final Project Proposal - orchestraSeating<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi6o4FoP_Yx4wE1QscYSweOzs1bhjtO53WphzHMrNZHR-z1kTl3zo9W1AWSB-hz0CA-KN2KPzxV8La99ocML0BQamEfwyr2-bGP9JWqRKhZz4aFaiv_VvVVfCR01alsGM5yTrTA0MGR8EW5/s1600-h/Slide1.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="300" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi6o4FoP_Yx4wE1QscYSweOzs1bhjtO53WphzHMrNZHR-z1kTl3zo9W1AWSB-hz0CA-KN2KPzxV8La99ocML0BQamEfwyr2-bGP9JWqRKhZz4aFaiv_VvVVfCR01alsGM5yTrTA0MGR8EW5/s400/Slide1.jpg" width="400" /></a></div>This week for Sound and The City, we presented our proposals for final projects. My project, orchestraSeating, is an installation piece that interactively deconstructs classical music scores. The PowerPoint presentation for my proposal can be found below.<br />
<span style="font-size: x-small;"><i><b><br />
</b></i></span><br />
<span style="font-size: x-small;"><i><b><a href="http://www.patrickproctor.com/sounds/ITP/orchestraSeating%20Proposal.ppt">ppt: orchestraSeating Proposal</a></b></i></span>Patrickhttp://www.blogger.com/profile/03497220940285089914noreply@blogger.com0tag:blogger.com,1999:blog-5985145776529519536.post-47438200002003590572010-02-09T07:59:00.000-08:002010-02-09T07:59:56.658-08:00Nature Of Code: Flower Modeling Mk. 3<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiMVIezfmc_7zGaajStd4YSthwuyy6-omg-mIU58hlAVmdwQCLrCygy2DuGPt6jrD0lgpf9aolEJm6TNq4Pm8nWKtAjBHiExhHRCJeE_AvOCzjF9uvhZQZ7y7EqhsuVeYeQ2AgXFpNEPIEy/s1600-h/Picture+1.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="235" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiMVIezfmc_7zGaajStd4YSthwuyy6-omg-mIU58hlAVmdwQCLrCygy2DuGPt6jrD0lgpf9aolEJm6TNq4Pm8nWKtAjBHiExhHRCJeE_AvOCzjF9uvhZQZ7y7EqhsuVeYeQ2AgXFpNEPIEy/s400/Picture+1.png" width="400" /></a></div>The image above is a screen grab of the third revision of my flower-based physics model, last mentioned <a href="http://iamjacksgraduateeducation.blogspot.com/2010/01/nature-of-code-flowers-and-wind.html">here</a>. Since then, the model has gone through a number of iterations. The first refined the graphics slightly, and more successfully integrated a wind force, as well as a flower "wobble". The initial issues I had with the wind being too uniform were resolved by putting a limit on the force vector, as opposed to the resulting velocity vector. The velocity limit had been causing all of the flowers to share velocity and direction. I introduced the "wobble" in an effort to give each flower its own movement, in spite of the shared wind vectors.<br />
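The distinction is where the clamp is applied. In this hypothetical Python sketch (the original is Processing, and the numbers here are invented), a magnitude limit on the wind *force* still lets each flower accumulate its own velocity, whereas the same limit on the resulting *velocity* would pin every flower to the same speed cap.

```python
import math

def limit(vx, vy, max_mag):
    """Clamp a 2D vector's magnitude without changing its direction."""
    mag = math.hypot(vx, vy)
    if mag > max_mag:
        scale = max_mag / mag
        return vx * scale, vy * scale
    return vx, vy

# A 3-4-5 force vector clamped to magnitude 2.5: direction is preserved,
# and each flower's own velocity history stays intact downstream.
fx, fy = limit(3.0, 4.0, 2.5)
print(round(math.hypot(fx, fy), 6))  # 2.5
```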
<br />
This revision, the third, takes the second revision and adds the concept of thermals. The thermals can be seen in the regions defined by white lines above. These thermals provide a third, largely upward, force that is defined by the developer. When blooms cross them, the thermals can produce spontaneous upward movement, ostensibly due to air currents arising from temperature differences. From here, I think the next logical step is converting the wind from a single force shared screen-wide into an array of wind vectors that vary based on screen location.<br />
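That next step could look something like this hypothetical grid lookup (a Python stand-in for the Processing code; the grid values and cell sizes are invented): each screen region gets its own wind vector, and a flower samples the cell it currently occupies.

```python
def wind_at(grid, x, y, cell_w, cell_h):
    """Look up the wind vector for a screen position in a coarse grid.
    `grid` is rows of (wx, wy) tuples; positions past the edge are
    clamped to the nearest cell."""
    col = min(int(x // cell_w), len(grid[0]) - 1)
    row = min(int(y // cell_h), len(grid) - 1)
    return grid[row][col]

# A 2x3 grid over a 300x200 screen: each 100x100 cell has its own wind.
grid = [[(1, 0), (2, 0), (1, -1)],
        [(0, 0), (1, 1), (2, -1)]]
print(wind_at(grid, 250, 150, 100, 100))  # (2, -1)
```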
<br />
<span style="font-size: x-small;"><i><b>Code: <a href="http://www.patrickproctor.com/sounds/ITP/flowersmk3.zip">Flower Modeling Mk. 3</a></b></i></span>Patrickhttp://www.blogger.com/profile/03497220940285089914noreply@blogger.com0tag:blogger.com,1999:blog-5985145776529519536.post-8966515625578722922010-02-04T07:01:00.000-08:002010-02-04T07:01:18.616-08:00Sound And The City: Soundwalk - "Imaginary Commute"<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjeO4iYCO9Q6GQQaSflcy6IWYoGTaR7vrTIVua0G7zYyIJTmutbtT0VrWJYncoXaDEf3jkG3UYPLe2F9SH9bZCIumoo_7Y6R3oEPyy3-J3WkhgJ5hYLF803CVV8IcWoTUCksv_njBzUyP2w/s1600-h/hm_commuters.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="342" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjeO4iYCO9Q6GQQaSflcy6IWYoGTaR7vrTIVua0G7zYyIJTmutbtT0VrWJYncoXaDEf3jkG3UYPLe2F9SH9bZCIumoo_7Y6R3oEPyy3-J3WkhgJ5hYLF803CVV8IcWoTUCksv_njBzUyP2w/s400/hm_commuters.jpg" width="400" /></a></div><br />
This week for <a href="http://www.soundcity.danielperlin.net/">Sound and The City</a>, we were asked to create a "soundwalk", whereby we would make a field recording of a walk through a given area, narrating along the way. Since I live about 10 blocks from Penn Station, I decided to do an "imaginary commute" as though my home were an office, and I needed to catch a train at Penn to get home. I left my front door around five in the evening, and the result is below.<br />
<br />
<span style="font-size: x-small;"><i><b>audio: <a href="http://www.patrickproctor.com/sounds/ITP/soundwalk.m4a">Soundwalk - Imaginary Commute</a></b></i></span>Patrickhttp://www.blogger.com/profile/03497220940285089914noreply@blogger.com0tag:blogger.com,1999:blog-5985145776529519536.post-46382657855064440042010-01-30T09:05:00.000-08:002010-01-30T09:05:37.913-08:00Little Computers: Some Interesting Flags For gcc<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg9kEUCYOt1EUVeG7tdxOTAtvEo8PkYfLz2HDl7dDrp3kmexOM_VrZeb9WkydVckzYku0Nh9p6RGDXvehA-siDrEW808n0UIw3UYOc4DX4UxM__OlqI-tMUHv_BNJNReQj_qma_0czztdCO/s1600-h/gcc-400.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg9kEUCYOt1EUVeG7tdxOTAtvEo8PkYfLz2HDl7dDrp3kmexOM_VrZeb9WkydVckzYku0Nh9p6RGDXvehA-siDrEW808n0UIw3UYOc4DX4UxM__OlqI-tMUHv_BNJNReQj_qma_0czztdCO/s320/gcc-400.png" /></a></div>This week in <a href="http://littlecomputers.net/2010/">Little Computers</a>, I was asked to put together a presentation on some "interesting flags" to <a href="http://gcc.gnu.org/">gcc</a>. You can check out the pdf of the presentation below!<br />
<br />
<span style="font-size: x-small;"><i><b>pdf: <a href="http://drop.io/r0nleks">"Some Interesting Flags For gcc"</a></b></i></span>Patrickhttp://www.blogger.com/profile/03497220940285089914noreply@blogger.com0tag:blogger.com,1999:blog-5985145776529519536.post-46371087671412828302010-01-28T08:38:00.000-08:002010-01-28T08:38:16.501-08:00Sound And The City: Deep Listening<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi3vVJ0yATTnKrhhDEz0UvMWHG2MaovoHxYAmO9Uf0188ZIGv-t-O15B43qHAVEyZY9eYecEQg-6Id2tPGf8hTBLjGURMYTefHfLluoRPD8BTJy1-dokcpH86khPkim97ca9n8H2IKI-i9m/s1600-h/deep.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="400" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi3vVJ0yATTnKrhhDEz0UvMWHG2MaovoHxYAmO9Uf0188ZIGv-t-O15B43qHAVEyZY9eYecEQg-6Id2tPGf8hTBLjGURMYTefHfLluoRPD8BTJy1-dokcpH86khPkim97ca9n8H2IKI-i9m/s400/deep.jpg" width="357" /></a><br />
</div>This week for <a href="http://www.soundcity.danielperlin.net/">Sound And The City</a> we were asked to do a bit of "deep listening". This essentially consists of spending twenty minutes in a given environment, focusing entirely on the environmental sound in that area. This includes having your eyes closed, doing nothing else, and generally paying as much attention as possible. After the twenty minutes are up, you take notes (of any kind you desire) on the experience, and the result is a "deep listening" log.<br />
<br />
Daniel (the course instructor) asked us in class to name a place we loved in New York - I chose the West Side Highway park, where I run daily. The trick here was that this then became the site of our deep listening experience. My log from the experience is pictured above, while an mp3 of the same time period is below. Enjoy!<br />
<br />
<span style="font-size: x-small;"><i><b>mp3: <a href="http://www.patrickproctor.com/sounds/ITP/20100128%20101356.mp3">Deep Listening, West Side Highway, New York</a></b></i></span>Patrickhttp://www.blogger.com/profile/03497220940285089914noreply@blogger.com0tag:blogger.com,1999:blog-5985145776529519536.post-38535344239674898042010-01-27T06:42:00.000-08:002010-01-27T06:42:25.778-08:00Pixel By Pixel: Analog Animation<object height="300" width="400"><param name="allowfullscreen" value="true" /><param name="allowscriptaccess" value="always" /><param name="movie" value="http://vimeo.com/moogaloop.swf?clip_id=9018942&server=vimeo.com&show_title=1&show_byline=1&show_portrait=0&color=&fullscreen=1" /><embed src="http://vimeo.com/moogaloop.swf?clip_id=9018942&server=vimeo.com&show_title=1&show_byline=1&show_portrait=0&color=&fullscreen=1" type="application/x-shockwave-flash" allowfullscreen="true" allowscriptaccess="always" width="400" height="300"></embed></object><br />
For <a href="http://itp.nyu.edu/varwiki/Syllabus/Pixels-S10">Pixel By Pixel</a> we were asked to "make pixels" using an analog source for our pixel generation. I decided to use my shower tiles and dry erase markers to create a big-pixel scene. The scene (above) depicts a short span in the geological evolution of a land mass, including rain, erosion, volcanic activity, and growth.<br />
<br />
In hindsight, I would have zoomed out more in order to have a higher resolution grid. I also discovered that while dry erase markers come right off of tile, the same is not true of grout. You live, you learn.Patrickhttp://www.blogger.com/profile/03497220940285089914noreply@blogger.com0tag:blogger.com,1999:blog-5985145776529519536.post-66268510445274054072010-01-26T07:05:00.000-08:002010-01-26T07:05:29.564-08:00Nature Of Code: Flowers and Wind<object height="340" width="560"><param name="movie" value="http://www.youtube.com/v/RUC2tpY5gb4&hl=en_US&fs=1&"></param><param name="allowFullScreen" value="true"></param><param name="allowscriptaccess" value="always"></param><embed src="http://www.youtube.com/v/RUC2tpY5gb4&hl=en_US&fs=1&" type="application/x-shockwave-flash" allowscriptaccess="always" allowfullscreen="true" width="560" height="340"></embed></object><br />
For our first week in <a href="http://itp.nyu.edu/varwiki/Syllabus/Nature-of-Code-S10">Nature Of Code</a>, we were asked to do the following:<br />
<br />
<div style="text-align: center;"><span style="font-size: x-small;"><i>Find an example of real-world "natural" motion and develop a set of rules for moving the Walker. Can you do it without using any random whatsoever? Without changing how the square looks at all (changing size or rotation is ok), can you give it a personality or make it <i>appear</i> to have an emotional quality? Create a second version with the same behavior, but with your own non-square design. Feel free to design an environment for the Walker to live in as well. We'll compare the versions in class next week. Can we create something <i>natural</i> through algorithmic behaviors alone? How much does visual design play a part?</i></span><br />
</div><div style="text-align: center;"><br />
</div><div style="text-align: left;"><b>Inspiration.</b> <br />
I took my inspiration from the PlayStation 3 game <a href="http://thatgamecompany.com/games/flower/">Flower</a> (see video above), and attempted to model flowers (namely, dandelions) being detached and floating in the wind. As per the request, I created two versions, one that used pure geometrics, and one with more visualization.<br />
<br />
<b>Implementation. </b><br />
<br />
<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjg_JOW9HUEuu5Yd72yWw00yCHVU9DaZqe_3EoKGYWdYfo64wyhnGszXoihf1HopLca0LtFLkBMigZjujgPnqJaeLSs2a03ieyCyEHTlgbDWfBKYKPGQqpTibpnZ_1RdRA8WOs018wCeAvq/s1600-h/Picture+1.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="202" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjg_JOW9HUEuu5Yd72yWw00yCHVU9DaZqe_3EoKGYWdYfo64wyhnGszXoihf1HopLca0LtFLkBMigZjujgPnqJaeLSs2a03ieyCyEHTlgbDWfBKYKPGQqpTibpnZ_1RdRA8WOs018wCeAvq/s400/Picture+1.png" width="400" /></a><br />
</div><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjaAT6Q47uANaOs-7tjfKLA_XwO01v99pAJoEpAQOnNX4LBd_ZXJhb1FNsqpCR3U67f_zSnt85Ww0xOpPC8ts_5lgc8p1gxGQeFen8pSfbQklkria_jEhjwBwHajs5YAN2CJWPpMa9RQh7d/s1600-h/Picture+2.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="317" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjaAT6Q47uANaOs-7tjfKLA_XwO01v99pAJoEpAQOnNX4LBd_ZXJhb1FNsqpCR3U67f_zSnt85Ww0xOpPC8ts_5lgc8p1gxGQeFen8pSfbQklkria_jEhjwBwHajs5YAN2CJWPpMa9RQh7d/s400/Picture+2.png" width="400" /></a><br />
</div>The first visualization uses squares and lines (along with modeled wind) to gradually harvest the dandelion buds from their stems. While the representation is geometric, I actually think the visualization has enough clarity, and a familiar enough layout, that it's pretty clear what's happening.<br />
<br />
<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi-PKJEFfJbiwhUoYt79gYjaW648DgvgV6772FJ67NhihHxjmOu9XufBbcoVK4n80V-vgE1vrH-cG6LtLj1Ka0zhdyZ8ZJwReI6VXVIRB8umpPFwml5kNoMAgWGBrdLxyjLokydtdZiMxV3/s1600-h/Picture+3.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="203" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi-PKJEFfJbiwhUoYt79gYjaW648DgvgV6772FJ67NhihHxjmOu9XufBbcoVK4n80V-vgE1vrH-cG6LtLj1Ka0zhdyZ8ZJwReI6VXVIRB8umpPFwml5kNoMAgWGBrdLxyjLokydtdZiMxV3/s400/Picture+3.png" width="400" /></a><br />
</div><div class="separator" style="clear: both; text-align: center;"> <a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgP0s00q8zIpGP2ivifw6TrBj1EXn4q05nV5xStEH0xvQDktnOL_mKuvgZasNSPbuWJqKpiDB_DHeqqMszV8tVqHYcg1khGzutaPefaLyLiGyqwsA2YvbMJc2bhs-AUriEZ8fejDBX4kGr7/s1600-h/Picture+4.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="249" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgP0s00q8zIpGP2ivifw6TrBj1EXn4q05nV5xStEH0xvQDktnOL_mKuvgZasNSPbuWJqKpiDB_DHeqqMszV8tVqHYcg1khGzutaPefaLyLiGyqwsA2YvbMJc2bhs-AUriEZ8fejDBX4kGr7/s400/Picture+4.png" width="400" /></a><br />
</div><div class="separator" style="clear: both; text-align: center;"> <a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhmJbwHryTZU4fjte2Sw0dCIX7dugrUrsfd5zz06fNVpHwZ2XlVLngn6DZNJRLpVjmyqL7KobC5RNPGp_j05kYZNFjF6QnE3c69B0u1bkmYdJJZo6FkQ4KkZBx3Ok5TqCdzID0YIfpABo4a/s1600-h/Picture+5.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="400" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhmJbwHryTZU4fjte2Sw0dCIX7dugrUrsfd5zz06fNVpHwZ2XlVLngn6DZNJRLpVjmyqL7KobC5RNPGp_j05kYZNFjF6QnE3c69B0u1bkmYdJJZo6FkQ4KkZBx3Ok5TqCdzID0YIfpABo4a/s400/Picture+5.png" width="271" /></a><br />
</div>The second animation uses the same code, but with color and small dandelion gifs to create a better sense of what's going on. While the original visualization creates a semi-clear representation, the second implementation does a noticeably better job. The colors and gifs create an animated, cartoony feel that successfully portrays the objects in question.<br />
<br />
<b>Challenges.</b><br />
Both implementations use the same modeling for the wind, which uses vectors and random numbers to specify when buds disconnect, and how they move once disconnected. Coding the disconnect was simply a matter of randomly disconnecting at a pace infrequent enough to feel natural.<br />
<br />
The wind was far more of a challenge, and required figuring out a pace of wind change that seemed neither too fast nor too slow. I eventually stumbled upon a combination of changing acceleration, changing direction, and avoiding downward wind that captured the idea well.<br />
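That trial-and-error recipe can be sketched in miniature. This is a hypothetical Python illustration rather than the sketch's actual code (the jitter amount and the sign convention are assumptions): the wind vector takes a small random walk each frame, and the vertical component is clamped so the wind never pushes downward (with screen y growing downward, "down" is positive y).

```python
import random

def step_wind(wx, wy, jitter=0.05, max_down=0.0):
    """One update of a shared wind vector: nudge it by a small random
    amount, then clamp the vertical component so it never points down."""
    wx += random.uniform(-jitter, jitter)
    wy += random.uniform(-jitter, jitter)
    wy = min(wy, max_down)   # forbid downward wind
    return wx, wy

random.seed(1)
w = (0.2, -0.1)
for _ in range(100):
    w = step_wind(*w)
print(w[1] <= 0.0)  # True: the vertical wind stays non-positive
```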
<br />
My main challenge (which I have yet to solve) is that the iteration over the flower array to update each vector results in an occasional pause in the motion that is highly unnatural, and definitely not desirable. I need to investigate this further, and determine whether there's a more efficient way to update the objects, or whether I need to reduce something algorithmically within the update.<br />
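One common culprit for intermittent pauses in Java/Processing sketches is per-frame object allocation triggering garbage-collection hiccups; that is only a guess about the cause here, but the fix pattern is easy to illustrate. In this hypothetical Python stand-in, the update loop mutates the existing objects in place and allocates nothing per frame (in a Processing sketch, `new PVector()` calls inside the loop would be the allocations to hunt down).

```python
class Flower:
    def __init__(self, x, y):
        self.x, self.y = x, y
        self.vx = self.vy = 0.0

def update_all(flowers, wx, wy, dt=1.0):
    """Update every flower in place, creating no new objects per frame."""
    for f in flowers:
        f.vx += wx * dt
        f.vy += wy * dt
        f.x += f.vx * dt
        f.y += f.vy * dt

flowers = [Flower(i, 0) for i in range(3)]
update_all(flowers, 0.5, -0.2)
print([(f.x, f.y) for f in flowers])  # [(0.5, -0.2), (1.5, -0.2), (2.5, -0.2)]
```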
<br />
<b>Takeaway.</b><br />
Short version: making things move naturally is hard.<br />
<br />
Long version: I had difficulty implementing any kind of movement that felt natural without using randoms. Even once I started using randoms, the implementation was largely one of trial and error, seeing what looked and "felt" natural, and elaborating on that.<br />
<br />
<i><b>Source Code: <a href="http://www.patrickproctor.com/sounds/ITP/noc-wk1-src.zip">Zip File with Source Code and gif</a></b></i> (Please note that the sketch is done in Eclipse, and will not work in a regular Processing environment.)<br />
</div>Patrickhttp://www.blogger.com/profile/03497220940285089914noreply@blogger.com0tag:blogger.com,1999:blog-5985145776529519536.post-87953947095488550962009-12-10T08:23:00.000-08:002009-12-10T08:23:23.249-08:00Visualizing Data: On Paul Graham<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj3k3MVH6neMcUE6PZyQITSH473KoBwO587gig1IfHnUYBGeGvzSSFIOaJfmwd_URFeNJvztveNWJZyVwvxx2Q1ikIwaaR3XKq6aO_LbGjo_lqIzQhp0fluDar526JYFZOPmMRdLXluYj4A/s1600-h/paulgraham_2082_374360.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj3k3MVH6neMcUE6PZyQITSH473KoBwO587gig1IfHnUYBGeGvzSSFIOaJfmwd_URFeNJvztveNWJZyVwvxx2Q1ikIwaaR3XKq6aO_LbGjo_lqIzQhp0fluDar526JYFZOPmMRdLXluYj4A/s320/paulgraham_2082_374360.jpg" /></a><br />
</div><i>Being that I've seen <a href="http://paulgraham.com/">Paul Graham</a> speak, have read <a href="http://paulgraham.com/hackpaint.html">his book</a>, and am partial to <a href="http://paulgraham.com/articles.html">his essays</a>, the talks delivered for Visualizing Data (see below) didn't present a ton of new information. That being said, it's worth taking a moment to discuss why it is that I enjoy Graham so much, and his most famous analogy between "hackers" and "painters".</i><br />
<br />
Graham's style of speaking and writing is unquestionably authoritative, and that's the beginning of what I like about him: he's not afraid to have strong opinions, and to put them out there. So many commentators today waste their time either pandering to the masses, or being incredibly extreme. With Graham, you get the feeling that not only does he believe in what he's saying, but that he's given it some real thought. Even his most caustic opinions (for example, his constant mocking of the Java programming language) are rooted in well-thought-out and valid positions.<br />
<br />
Most of those positions end up being about one of three things: smart people, hackers, or programming. Which brings me to the second thing I like about Graham: he's not afraid to admit that there are smart people out there in the world, and that they behave differently than others. He's willing to cite the good (high productivity, more inspiration) and the bad (stubbornness, near autistic behavior), but most importantly he's willing to admit that they're smart. These days we're far too bogged down in a culture where everyone's getting a pat on the head, and Graham is far more inclined to give the truth than to put a rosy tint on everything.<br />
<br />
When discussing these super-intelligent "hackers", Graham then takes a stance that (at least when he originally took it) was unique: he treats them as people and creators. Computer programming has long been compared to engineering and math, as a sort of technical discipline. Graham takes his unique role as both an artist and a programmer and proposes the opposite: that programmers (or "hackers") are actually creative people who simply use an engineering device and medium as their means of expression.<br />
<br />
This treatment culminates in Graham's famous analogy between hackers and painters. The two groups are alike, Graham supposes, in that they each fill two roles: they have to decide <i>what </i>to do, and <i>how </i>to do it. While many other creative/engineering fields split these roles between two jobs (he cites architects and engineers as the "what" and "how", respectively), Graham points out that both painters and hackers are responsible for creating their idea, and then engineering it as well.<br />
<br />
<a href="http://itc.conversationsnetwork.org/shows/detail188.html"><i> <b>Paul Graham: Great Hackers</b></i></a><br />
<a href="http://itc.conversationsnetwork.org/shows/detail164.htm"><i><b>Paul Graham: Hackers And Painters</b></i></a>Patrickhttp://www.blogger.com/profile/03497220940285089914noreply@blogger.com0tag:blogger.com,1999:blog-5985145776529519536.post-42649063131427092542009-12-09T20:29:00.000-08:002009-12-09T20:29:34.969-08:00Awesomeness From Applications ClassA while back in applications class, a group had us do this awesome stop motion pong game. Truly killer - just found the vid. Enjoy!<br />
<object width="400" height="300"><param name="allowfullscreen" value="true" /><param name="allowscriptaccess" value="always" /><param name="movie" value="http://vimeo.com/moogaloop.swf?clip_id=7460178&server=vimeo.com&show_title=1&show_byline=1&show_portrait=0&color=&fullscreen=1" /><embed src="http://vimeo.com/moogaloop.swf?clip_id=7460178&server=vimeo.com&show_title=1&show_byline=1&show_portrait=0&color=&fullscreen=1" type="application/x-shockwave-flash" allowfullscreen="true" allowscriptaccess="always" width="400" height="300"></embed></object>Patrickhttp://www.blogger.com/profile/03497220940285089914noreply@blogger.com0tag:blogger.com,1999:blog-5985145776529519536.post-47959588714855179302009-12-01T11:43:00.000-08:002009-12-01T11:43:23.255-08:00Physical Computing Midterm: Media Controller; "Pancake"<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiuBYOjnmLq6fn5rViXxBzA6bS9KNQewWnGrt_pI_JkyuRl9WU_KeE3xCOojGZ9VwN_53YMWZ1BnydIBjwu7gFDc_iPu3dqN4MvQgC9-T_DaE6AuDx2MV6s75nL9_wiqSE8ipc9nLCE9MnR/s1600/IMG_0070.JPG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiuBYOjnmLq6fn5rViXxBzA6bS9KNQewWnGrt_pI_JkyuRl9WU_KeE3xCOojGZ9VwN_53YMWZ1BnydIBjwu7gFDc_iPu3dqN4MvQgC9-T_DaE6AuDx2MV6s75nL9_wiqSE8ipc9nLCE9MnR/s400/IMG_0070.JPG" /></a><br />
</div><div style="text-align: center;"><i>The "Pancake"</i><br />
</div><div style="text-align: center;"><i> <br />
</i><br />
</div><i>Again, a bit late on the delivery of this one. Apologies to those waiting with bated breath....</i><br />
<br />
For our Physical Computing midterm, we were split into groups and asked to create a media controller of our own devising. No limits or requirements were put in place, except that the controller would be a physical interface to the arduino, and would control some external media device. I was teamed with the wonderful <a href="http://avoidobvious.com/blog/?cat=3">Amy Chien</a> and <a href="http://itp.nyu.edu/%7Ecja255/blog/?cat=3">Chris Alden</a>, and the three of us came to the conclusion that we'd like to build something of an electronic musical instrument.<br />
<br />
To start things off, we brainstormed about possible ideas for the media controller's interface. However, we came up with so many cool ideas that we almost immediately decided we'd like to do multiple interfaces. This worked its way into a concept for a modular music table that would be split into various pieces and allow three individuals to collaborate using three different interfaces.<br />
<br />
<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhuRcWwigladt4aSXv0vBICQu6oLmcoM52ACNSX76Dq7k9DkxfHd_GTiptNSYBCDavpvQGhIlYRoPUE9xToEaJBmgpCqkmEG4k5wkf5MzphL1TF8YJcbvs69p9ITB2UqTo7K5gCQ1S9YCB2/s1600/IMG_0003.JPG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhuRcWwigladt4aSXv0vBICQu6oLmcoM52ACNSX76Dq7k9DkxfHd_GTiptNSYBCDavpvQGhIlYRoPUE9xToEaJBmgpCqkmEG4k5wkf5MzphL1TF8YJcbvs69p9ITB2UqTo7K5gCQ1S9YCB2/s400/IMG_0003.JPG" /></a><br />
</div><div style="text-align: center;"><i>Hauling wood.</i><br />
</div><div style="text-align: center;"> <br />
</div>The first step we took was to construct the table surface. We procured two large pieces of plywood, and cut them into identical circles. One circle was the table itself, while the other was cut into pieces for us to build the interface modules. Each of us took on one of the modules, and each module used a different physical interface. The three interfaces were a wind controller, a pressure pad, and a conductive sequencer.<br />
<br />
<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgM5uylE5oPWrsk5SWkSpwgvbHYefgIvKbYwDzJf6VkEHi4fq-gwZioAAAvhLwXSMqonxEw6fLQn7VQT7YkeylBViRNXN17JRtobh20rDuJ0FWwObsdb3GmPLJA0MkP9zNAXZdBdidt6YdW/s1600/IMG_0062.JPG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgM5uylE5oPWrsk5SWkSpwgvbHYefgIvKbYwDzJf6VkEHi4fq-gwZioAAAvhLwXSMqonxEw6fLQn7VQT7YkeylBViRNXN17JRtobh20rDuJ0FWwObsdb3GmPLJA0MkP9zNAXZdBdidt6YdW/s400/IMG_0062.JPG" /></a><br />
</div><div style="text-align: center;"><i>All three interfaces. </i><br />
</div><br />
Once we had completed and tested the three interfaces, we brought them together to the table, and unified them into a single instrument. Each interface had its own arduino; the three arduinos were then collectively wired into a laptop via a USB hub. Once all three arduinos were recognized on the computer, we hooked them up to Max/MSP and created a patch that would listen for the serial values from each interface, and use them to control tonal output.<br />
<br />
<object width="400" height="300"><param name="allowfullscreen" value="true" /><param name="allowscriptaccess" value="always" /><param name="movie" value="http://vimeo.com/moogaloop.swf?clip_id=7504190&server=vimeo.com&show_title=1&show_byline=1&show_portrait=0&color=&fullscreen=1" /><embed src="http://vimeo.com/moogaloop.swf?clip_id=7504190&server=vimeo.com&show_title=1&show_byline=1&show_portrait=0&color=&fullscreen=1" type="application/x-shockwave-flash" allowfullscreen="true" allowscriptaccess="always" width="400" height="300"></embed></object><br />
The pressure pad controlled pitch bending, the wind interfaces controlled the speed of the tone playback, and the sequencer controlled the notes being played over the loop. Once the patch was loaded, all three interfaces could be used simultaneously to control the computer's sonic output. You can see a demo of one of our classmates joining us in trying out the interface above.<br />
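Underneath, each of those mappings boils down to the same linear rescale that Arduino's map() function and Max's [scale] object perform. As a hedged illustration (the ranges below are hypothetical, not the patch's actual numbers):

```python
def rescale(value, in_lo, in_hi, out_lo, out_hi):
    """Linearly map a raw sensor reading into a musical parameter range
    (the same arithmetic as Arduino's map() or Max's [scale] object)."""
    return out_lo + (value - in_lo) * (out_hi - out_lo) / (in_hi - in_lo)

# Hypothetical: a 0-1023 pressure reading driving pitch bend in semitones.
print(round(rescale(512, 0, 1023, -2.0, 2.0), 3))  # 0.002, near center
```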
<br />
<object width="400" height="300"><param name="allowfullscreen" value="true" /><param name="allowscriptaccess" value="always" /><param name="movie" value="http://vimeo.com/moogaloop.swf?clip_id=7504025&server=vimeo.com&show_title=1&show_byline=1&show_portrait=0&color=&fullscreen=1" /><embed src="http://vimeo.com/moogaloop.swf?clip_id=7504025&server=vimeo.com&show_title=1&show_byline=1&show_portrait=0&color=&fullscreen=1" type="application/x-shockwave-flash" allowfullscreen="true" allowscriptaccess="always" width="400" height="300"></embed></object><br />
Finally, we presented our project in class. Here you can see a brief clip of our presentation, which managed to go off without a hitch. Midterm, complete. Go Pancake!Patrickhttp://www.blogger.com/profile/03497220940285089914noreply@blogger.com0tag:blogger.com,1999:blog-5985145776529519536.post-51897561284786517152009-11-23T20:38:00.000-08:002009-11-23T20:38:54.306-08:00Physical Computing Week Eight Lab: Transistors and H-Bridge<i>Again, a bit late with this one, but better late than never. In week eight of Physical Computing, we investigated the use of two slightly more complex devices: the transistor and the h-bridge.</i><br />
<br />
<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjSok0OoF2FPEsr-guTPN6V1Z-Hq7PseAJ46oT4W4huar1irWZvpT9tDgLh92iu3_Jh6MsfATpwnW6nsbpUeGDb4aIA6DaGgq1VPXQ8hoK_7Im0-CQk5CGB6tJieS-QW5TQvWmgYL7XX-p_/s1600/P1020194.JPG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjSok0OoF2FPEsr-guTPN6V1Z-Hq7PseAJ46oT4W4huar1irWZvpT9tDgLh92iu3_Jh6MsfATpwnW6nsbpUeGDb4aIA6DaGgq1VPXQ8hoK_7Im0-CQk5CGB6tJieS-QW5TQvWmgYL7XX-p_/s400/P1020194.JPG" /></a><br />
</div><div class="separator" style="clear: both; text-align: center;"><br />
</div><i>The transistor lab consisted of attaching a transistor to a small motor, and controlling the voltage output to the motor via the transistor. This differs from a typical input/output in that the transistor can accept a far higher voltage than the arduino microcontroller's 5-volt power supply. As such, the arduino can still be used to control a device that requires a much higher voltage. The circuit with the motor can be seen above, while a video of the on/off control can be seen below.</i><br />
<br />
<div style="text-align: center;"><object height="300" width="400"><param name="allowfullscreen" value="true" /><param name="allowscriptaccess" value="always" /><param name="movie" value="http://vimeo.com/moogaloop.swf?clip_id=7788915&server=vimeo.com&show_title=1&show_byline=1&show_portrait=0&color=&fullscreen=1" /><embed src="http://vimeo.com/moogaloop.swf?clip_id=7788915&server=vimeo.com&show_title=1&show_byline=1&show_portrait=0&color=&fullscreen=1" type="application/x-shockwave-flash" allowfullscreen="true" allowscriptaccess="always" width="400" height="300"></embed></object><br />
</div><br />
<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg-OTFWvggMzVxxm98HiJNU3p6-HXM4jcW83sAJsBLEfPocYgqK1MEs-o9fSTTxVpHAMsOxrbCbzrgM0CSnYWxOWSQNQKv1VrBSngnEC7utuJ2M-mzoBM_Gm0w_V3PjfuOV2pCE7upBPAxT/s1600/P1020199.JPG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg-OTFWvggMzVxxm98HiJNU3p6-HXM4jcW83sAJsBLEfPocYgqK1MEs-o9fSTTxVpHAMsOxrbCbzrgM0CSnYWxOWSQNQKv1VrBSngnEC7utuJ2M-mzoBM_Gm0w_V3PjfuOV2pCE7upBPAxT/s400/P1020199.JPG" /></a><br />
</div><i>The h-bridge lab consisted of using an integrated circuit known as an h-bridge to control the direction of current. Put differently: the motor from the first lab will run in either direction, depending on which way it is wired into the circuit. The h-bridge lets us select the direction of current flow, so a single wiring scheme suffices for the motor while a switch decides which way we'd like it to rotate. You can see a photo of the circuit above (the h-bridge is in the center), with a video of the bi-directional motor control below.</i><br />
<br />
<div style="text-align: center;"><object height="300" width="400"><param name="allowfullscreen" value="true" /><param name="allowscriptaccess" value="always" /><param name="movie" value="http://vimeo.com/moogaloop.swf?clip_id=7788950&server=vimeo.com&show_title=1&show_byline=1&show_portrait=0&color=&fullscreen=1" /><embed src="http://vimeo.com/moogaloop.swf?clip_id=7788950&server=vimeo.com&show_title=1&show_byline=1&show_portrait=0&color=&fullscreen=1" type="application/x-shockwave-flash" allowfullscreen="true" allowscriptaccess="always" width="400" height="300"></embed></object><br />
</div>Patrickhttp://www.blogger.com/profile/03497220940285089914noreply@blogger.com0tag:blogger.com,1999:blog-5985145776529519536.post-16027837201980112142009-11-23T20:04:00.000-08:002009-11-23T20:04:33.138-08:00Visualizing Data: On Jonathan Harris<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjpojj2FHAOdicOLVTs_5_gAKAzsR-7vfVqO5VjL9oqh5b2XnVLLlss9nEHyPlAnAvlMcYmKQCAJ_vVJwuLdMSrQP0ZYQxVmcjbmZMcbet4pFWYeuVIg6mxeNejYLWi8dOIGTbtkPLAJjGd/s1600/wffbook.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjpojj2FHAOdicOLVTs_5_gAKAzsR-7vfVqO5VjL9oqh5b2XnVLLlss9nEHyPlAnAvlMcYmKQCAJ_vVJwuLdMSrQP0ZYQxVmcjbmZMcbet4pFWYeuVIg6mxeNejYLWi8dOIGTbtkPLAJjGd/s320/wffbook.jpg" /></a><br />
</div><i> While I'm a week or two late in posting, here are some thoughts on <a href="http://www.number27.org/">Jonathan Harris</a>, complete with prompts from the Visualizing Data blog...</i><br />
<i><br />
</i><br />
<i><b>Do you find his pieces effective? </b></i><br />
<i>Harris seems to aim for an emotive, human aspect in his work, and in that sense I would say his pieces are extremely effective. He creates both visuals and text streams that convey a strong sense of emotion and the human element. Part of this is rooted in his use of live data sets, which add an immediacy and reality to his work. The randomness of the imagery also delivers a feeling of humanity, as it creates a constant and undefinable imperfection in the work.<br />
</i><br />
<i><br />
</i><br />
<i><b>What might you change if it were your project? </b></i><br />
<i>I feel as though I might use slightly less saccharine visuals. While I feel that Harris' visuals are extremely effective, they have a certain pastel, Hallmark quality to them that doesn't quite appeal to me. <br />
</i><br />
<br />
<i><b>What tools (color, motion, etc.) does Jonathan employ to express emotive qualities in his work? </b></i><br />
<i>Harris uses motion almost constantly in his work to relay a feeling of "nowness". The constant movement creates an unavoidable sense that the dialogue is occurring as you sit there watching it. He also uses pastel colors (presumably for their "emotive" feel), but as mentioned above, this really doesn't appeal to me. Even in a site that riffs on the terrorism threat level, Harris still resorts to almost-pastels.<br />
</i><br />
<br />
<i><b>What makes his body of work feel different than Karsten’s or Aaron’s?</b></i><br />
<i>Most significantly for me, it's the use of live data. Karsten and Aaron both obtain data sets and then create a deliberate, planned visual for them. By building more of a data "engine," Harris can use live data off the web, giving his work an immediacy and reality that the others' work lacks.<b> <br />
</b></i>Patrickhttp://www.blogger.com/profile/03497220940285089914noreply@blogger.com0tag:blogger.com,1999:blog-5985145776529519536.post-76155078416090439112009-11-05T08:21:00.000-08:002009-11-05T08:21:34.742-08:00Visualizing Data: On Edward Tufte<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjatw7vR5ocTTDspsS4F5tY19bznPTyIOcSdMK0Kl2x32ql9mIXktV7kTFZ5CyoyNP9N-mO82Zcu5YxRifHbCesWtERJysMLazdOyTLnJ_wL_aC5ZlEvocGXlCI51dtKAk3zl_V-b2InVVo/s1600-h/TUFTE.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjatw7vR5ocTTDspsS4F5tY19bznPTyIOcSdMK0Kl2x32ql9mIXktV7kTFZ5CyoyNP9N-mO82Zcu5YxRifHbCesWtERJysMLazdOyTLnJ_wL_aC5ZlEvocGXlCI51dtKAk3zl_V-b2InVVo/s320/TUFTE.jpg" /></a><br />
</div><i> This week in Visualizing Data, we were asked to explore and delve into the work of <a href="http://www.edwardtufte.com/tufte/">Edward Tufte</a>, specifically with regard to these two videos: <a href="http://www.youtube.com/watch?v=YslQ2625TR4">1</a>, <a href="http://visualthinkmap.ning.com/video/edward-tufte-interview">2</a>. We were then asked to respond to a set of questions, as follows:</i><br />
<i> </i><br />
<b><i>With Tufte’s close examination of the iPhone, did you find yourself alerted to interface elements you were aware of but hadn’t paid attention to? </i></b><br />
<i>Maybe it's because I've had an iPhone for going on two years, or maybe because I've developed for it, but I didn't particularly find myself taken by surprise by any of the elements illustrated. <br />
</i><br />
<br />
<b><i>Does Tufte make assertions that you disagree with? (Choose a specific example and explain.) </i></b><br />
<i>While I agree with Tufte's quest for more granularity on the weather page, I disagree with his similar assessment of the stock market page. While granular weather data (and a map) carries weight for everyone, I think that most people, when checking their stocks, are looking for an extremely high-level "what's the Dow" sort of insight. Tufte's stock graphic contained far too much information and wasn't in the spirit of what the iPhone provides: on-the-go data. If I needed to dissect 12 months of stock data, I wouldn't be picking up my iPhone; I'd be sitting down at a desk.<br />
</i><br />
<br />
<b><i>Where does Tufte think the best visualizations of today are published? </i></b><br />
<i>Tufte expresses that the best visualizations come from those with extensive quantitative skills. Specifically, he cites the "rock stars" of scientific journals, namely <a href="http://www.sciencemag.org/">Science</a> and <a href="http://www.nature.com/nature/index.html">Nature</a>. <br />
</i><br />
<br />
<b><i>What’s his logic for this conclusion? </i></b><br />
<i> His logic is based on the fact that the individuals producing these articles are extremely bright, have large data sets, and are offered limited space in which to publish. The result is a necessity to design high-efficiency, extremely dense data presentations.<br />
</i><br />
<br />
<b><i>In general who does he see as the creators of great data visualizations? Scientists? Graphic Artists? Programmers? </i></b><br />
<i>While he doesn't express a completely final opinion, it seems that Tufte holds the actual producers of the data, who understand it most deeply, in the highest regard. In his description he focuses most on scientists, while noting that certain people may need the assistance or hand-holding of a graphic designer or artist.</i><br />
<br />
<b><i>What’s your own opinion, and what do you consider your label or role to be? </i></b><br />
<i> I think it's extremely difficult to reach conclusions quite as decisive as Tufte's. He seems to hold relatively black-and-white opinions on the topic, and I actually feel that, as time progresses, those who embrace multiple disciplines will garner the most success. This is what I'm trying to do personally, and much of the reason I'm at ITP: I already have an extensive technical skill set, but I want to augment it with other skills and insights, to ultimately yield a wider breadth of understanding.<br />
</i><br />
<br />
<b><i>How might an “anti-social network” function?</i></b><br />
<i>I think about this quite a bit, because socialization can be so life-dominating that anti-socialization almost seems destined to become a useful and necessary tool. Most obviously, an anti-social network might simply limit your media access to the things you need to focus on and keep the rest of the world at bay. It could also stratify people based on what they didn't like, essentially reversing all the attempted "matching" of similar interests that goes on today.</i>Patrickhttp://www.blogger.com/profile/03497220940285089914noreply@blogger.com0