Having seen Paul Graham speak and read his book, and being partial to his essays, I found that the talks delivered for Visualizing Data (see below) didn't present a ton of new information. That said, it's worth taking a moment to discuss why I enjoy Graham so much, along with his most famous analogy between "hackers" and "painters".
Graham's style of speaking and writing is unquestionably authoritative, and that's the beginning of what I like about him: he's not afraid to have strong opinions and to put them out there. So many commentators today waste their time either pandering to the masses or being incredibly extreme. With Graham, you get the feeling not only that he believes what he's saying, but that he's given it some real thought. Even his most caustic opinions (for example, his constant mocking of the Java programming language) are rooted in well-thought-out and valid positions.
Most of those positions end up being about one of three things: smart people, hackers, or programming. Which brings me to the second thing I like about Graham: he's not afraid to admit that there are smart people out there in the world, and that they behave differently than others. He's willing to cite the good (high productivity, more inspiration) and the bad (stubbornness, near-autistic behavior), but most importantly he's willing to admit that they're smart. These days we're far too bogged down in a culture where everyone gets a pat on the head, and Graham is far more inclined to give the truth than to put a rosy tint on everything.
When discussing these super-intelligent "hackers", Graham then takes a stance that (at least when he originally took it) was unique: he treats them as people and creators. Computer programming has long been compared to engineering and math, as a sort of technical discipline. Graham draws on his unique role as both an artist and a programmer to propose the opposite: that programmers (or "hackers") are actually creative people who simply use an engineering device and medium as their means of expression.
This treatment culminates in Graham's famous analogy between hackers and painters. The two groups are alike, Graham supposes, in that they both play two roles: they have to decide what to do, and how to do it. While many other creative/engineering fields split this into two jobs (he cites architects and engineers as the "what" and the "how", respectively), Graham points out that painters and hackers alike are responsible for conceiving their idea, and then engineering it as well.
Paul Graham: Great Hackers
Paul Graham: Hackers And Painters
Thursday, December 10, 2009
Monday, November 23, 2009
Visualizing Data: On Jonathan Harris
While I'm a week or two late in posting, here are some thoughts on Jonathan Harris, complete with prompts from the Visualizing Data blog...
Do you find his pieces effective?
Harris seems to desire an emotive, human aspect to his work, and in that sense I would say that his pieces are extremely effective. He creates both visuals and text streams that convey a strong sense of emotion and the human element. Part of this is rooted in his use of live data sets, which add an immediacy and reality to his work. The randomness of the imagery also delivers a feeling of humanity, as it creates a constant and undefinable imperfection in the work.
What might you change if it were your project?
I feel as though I might use slightly less saccharine visuals. While I feel that Harris' visuals are extremely effective, they have a certain pastel, Hallmark quality to them that doesn't quite appeal to me.
What tools (color, motion, etc.) does Jonathan employ to express emotive qualities in his work?
Harris uses motion almost constantly in his work to relay a feeling of "nowness". The constant movement creates an unavoidable sense that the dialogue is occurring as you sit there watching it. He also uses pastel colors (presumably for their "emotive" feel), but as mentioned above, this really doesn't appeal to me. Even in a site that riffs on the terrorism threat level, Harris still resorts to almost-pastels.
What makes his body of work feel different than Karsten’s or Aaron’s?
Most significantly for me, it's the use of live data. Karsten and Aaron both obtain data sets and then create a deliberate, planned visual for them. Harris' creation of more of a data "engine" allows him to use live data off the web, lending his work an immediacy and reality that the others lack.
Thursday, November 5, 2009
Visualizing Data: On Edward Tufte
This week in Visualizing Data, we were asked to delve into the work of Edward Tufte, specifically with regard to these two videos: 1, 2. We were then asked to answer a set of questions, as follows:
With Tufte’s close examination of the iPhone, did you find yourself alerted to interface elements you were aware of but hadn’t paid attention to?
Maybe it's because I've had an iPhone for going on two years, or maybe because I've developed for it, but I didn't particularly find myself taken by surprise by any of the elements illustrated.
Does Tufte make assertions that you disagree with? (Choose a specific example and explain.)
While I agree with Tufte's quest for more granularity on the weather page, I disagree with his similar assessment of the stock market page. While granular weather data (and a map) is something that carries weight for everyone, I think that most people, when checking their stocks, are looking for an extremely high level "what's the Dow" type of insight. Tufte's stock graphic contained far too much information, and wasn't in the spirit of what the iPhone provides: on-the-go data. If I needed to dissect 12 months of stock data, I wouldn't be picking up my iPhone; I'd be sitting down at a desk.
Where does Tufte think the best visualizations of today are published?
Tufte expresses that the best visualizations come from those with extensive quantitative skills. Specifically, he cites the "rock stars" of scientific journals, namely Science and Nature.
What’s his logic for this conclusion?
His logic is based in the fact that the individuals producing these articles are extremely bright, have large data sets, and are offered limited space for their publications. The result is a necessity to design high efficiency, extremely dense data presentations.
In general who does he see as the creators of great data visualizations? Scientists? Graphic Artists? Programmers?
While he doesn't express a completely final opinion, it seems that Tufte has high regard for the actual producers of the data, who have a deep understanding of it. In his description, he seems to focus most on scientists, while at the same time noting that certain people may need the assistance or hand-holding of a graphic designer or artist.
What’s your own opinion, and what do you consider your label or role to be?
I think it's extremely difficult to make conclusions quite as decisive as Tufte's. He seems to have relatively black-and-white opinions about the topic, and I actually feel that, as time progresses, those who embrace multiple disciplines will garner the most success. As such, this is what I'm trying to do personally, and much of the reason I'm at ITP. I already have an extensive technical skill set, but I want to augment it with other skills and insights, to ultimately yield a wider breadth of understanding.
How might an “anti-social network” function?
I think about this quite a bit, because socialization can be so life-dominating that anti-socialization almost seems destined to become a useful and necessary tool. Most obviously, an anti-social network might simply limit your media access to the things you need to focus on, and keep the rest of the world at bay. However, it could also stratify people based on what they didn't like, essentially doing the reverse of all the attempted "matching" of similar interests that goes on today.
Wednesday, October 28, 2009
Visualizing Data: Reaction To Karsten Schmidt
Karsten Schmidt describes himself as a "Computational Designer", and the description is apt: most of his design projects are driven by code and generative algorithms. In stark contrast to Aaron Koblin, who uses small amounts of data from a wide range of individuals, Schmidt uses programmatic iterations that generate similarly unique data sets.
The generative nature of Schmidt's work definitely separates it from Koblin's more "techy" work: while Koblin's pieces simply use data sets with modern visualizations, Schmidt's generate their own data. In the end, this results in the pieces (which could potentially be seen as more robotic or inhuman) actually having more in common with Koblin's human-created data sets.
This conclusion about Schmidt's work brings to mind a number of interesting questions in the area of data, technology, and intelligence. Specifically: what is interesting about data, and where are its most interesting sources? Moreover, if data is real or generated, does that make a difference, and what is it about the presentation that gives it a more human feel?
To my mind, Schmidt's work unquestionably draws attention to the fact that artificial, generated data can be every bit as human as real data sets, perhaps more so. Considering it further, this may be because generative data "grows" in much the same way a group of humans "grows" a widely dispersed data set. In the end, perhaps it is the life of the data, rather than its end points, that truly defines how it is perceived.
Visualizing Data: Reaction To Aaron Koblin
In looking at Aaron Koblin's work, the pieces seem to divide into two categories: those that use Amazon's Mechanical Turk to generate data, and those that simply create visualizations of large data sets. While the visualizations certainly have their appeal, I have to say that I prefer the Mechanical Turk projects.
The Mechanical Turk-generated data sets not only provide novel visualizations, but also novel ways of generating the data that led to them. Seeing how data sets created at a micro level are still inexact is an interesting analogy for how larger projects can have an inexact nature to them. What's more, Koblin's visuals are compelling and unexpected.
By contrast, the visualizations that depend on external data seem to suffer from a forced feel of trying too hard to be futuristic or "different", but in fact end up being cliché. The idea of mapping flight patterns or telephone lines has been done a million times, while the "House of Cards" video is utterly reminiscent of the pin-art pads that form-fit to represent shapes.
Out of all of the pieces, my favorite is probably The Sheep Market: the representations are novel and humorous, the data collection interesting, and the result enjoyable to navigate. In short, it presents some serious concepts about data generation in the modern world while at the same time giving them a humanistic feel.
If anything, that is the shortcoming in Koblin's less enticing work: the absence of humanity, and a feeling of overly conscious attempts to be futuristic and technologically advanced. Part of this reaction is probably driven by overexposure to faux-futurist imagery, but it's also a result of the fact that Koblin's more human works are simply more novel and easier to relate to.
Tuesday, October 13, 2009
Visualizing Data: Jan Tschichold, Graphis, And Josef Müller-Brockmann
This week in Visualizing Data, we were asked to look into the work of three designers: Jan Tschichold, Graphis, and Josef Müller-Brockmann. All three are associated with Switzerland and worked mainly in the mid-20th century. Moreover, the three also seem to share a unified aesthetic sense based firmly in a simple, rigid, and linear form.
While the three certainly create some interesting images, I'm not totally sure what it is that might distinguish Swiss Modernism from Modernism in general. Moreover, I found these three in particular somewhat difficult to research, and their presence on the web is not quite as prevalent as that of their more famous colleagues.
That being said, it's clear that there is a unity amongst their work: all three seem to lean towards geometric designs that employ simple fonts, geometric shapes, and stark colors. What's more, all three are hailed as innovators in this area. As such, it may be that their innovations and style seem more mundane in today's climate where many of their stylistic choices have become part of the mainstream.
Wednesday, October 7, 2009
"Mystery Data CSV" Parsing
This week for Visualizing Data we were given a "mystery" data set, along with some hints that the set (wink, wink) might contain x-y coordinates. This was an exercise not only in parsing CSVs, but also in taking in data and deriving meaning from it.
A quick parse of the data revealed the x-y coordinates, and quickly demonstrated that they represent a map of the world. The third (data) value was indeterminate, but appeared to represent some sort of variable (population, energy consumption?) associated with more populous areas. When used as a pixel's alpha value, the picture quickly came to resemble the well-known maps of Earth from space at night.
While this was all well and good, it didn't seem to reveal anything about the data, other than that it was exactly what it appeared to be, and that there was worldwide trending. However, in an effort to determine slightly more about it, I decided to project the data values into the y axis, and the y axis into z space. This meant that the map was rendered horizontally, with the height of the map representing the data at a given point.
Once this was done, it revealed a few more interesting facts about the data:
1) Despite the "hot spots", there's no particular area of the world that lacks high data points. High and low values occur across the breadth of the map.
2) The data appears to be highly stratified across the map, resulting in data "rows" on the y-axis. While I can't be sure why this might be, it seems likely that these "rows" are the result of estimates or rounding employed in the data collection.
Overall, this exercise allowed me to parse CSVs, which is relatively trivial. However, it also forced me to look at the data a little more closely, and in doing so revealed some facts that might otherwise have been overlooked in the 2D model.
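The steps above can be sketched in a few lines. This is a hedged reconstruction in Python (the original was presumably a Processing sketch); the function names and the three-column x, y, value layout are assumptions based on the description, not the actual assignment code:

```python
import csv
from io import StringIO

def parse_points(csv_text):
    """Parse rows of "x,y,value" into a list of (x, y, value) float tuples."""
    points = []
    for row in csv.reader(StringIO(csv_text)):
        if len(row) < 3:
            continue  # skip blank or malformed rows
        points.append(tuple(float(cell) for cell in row[:3]))
    return points

def value_to_alpha(value, v_min, v_max):
    """Map a data value linearly onto a 0-255 alpha channel."""
    if v_max == v_min:
        return 255
    return int(255 * (value - v_min) / (v_max - v_min))

def project_to_3d(points):
    """Swap axes as described above: the data value becomes the height (y),
    and the original y coordinate moves into z space."""
    return [(x, v, y) for (x, y, v) in points]
```

Rendered flat with the third column driving each pixel's alpha, this produces the "Earth at night" look; the projected version turns the same points into a height field, which is what exposed the stratified "rows" in the data.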
Download code by clicking here
Monday, October 5, 2009
Some Thoughts On The Gotham Typeface
This week in Visualizing Data, we were asked to consider the Gotham typeface and answer a few questions. Here are my thoughts.
What is the “Gotham” typeface and what is its design inspired by?
Gotham is a typeface commissioned by GQ magazine in an attempt to find something new, geometric, and masculine. It was inspired by the lettering on the buildings of "old New York", specifically the Port Authority terminal.
What type foundry drew and released Gotham?
Gotham was drawn and released by the foundry Hoefler & Frere-Jones.
How much does this type foundry charge for the “Gotham Bundle” for a single computer?
The "Gotham Bundle" sells for $69.00 on the H&F-J site.
How does that make you feel about fonts you’ve pilfered (if you have done so)?
Frankly, I think that typefaces should be free when used outside of a business context. The concept of "owning" a typeface even seems silly in a general sense, but is a necessity for foundries to exist. That being said, as a student and/or creative artist the likelihood that I would ever personally pay for a font is precisely zero.
And briefly, who is Matthew Carter and what did he contribute to digital typography?
Carter is a typographer who began working in the 1960s as an apprentice. He later went on (in 1981) to co-found Bitstream, one of the first digital-specific foundries. His contribution can be most easily summed up as pioneering fonts tuned specifically for a high degree of readability on a computer screen.
Theme And Variation
In the Theme And Variation assignment, we were required to use only two input values: a string of text and a single integer. We would then use these two inputs, along with only black and white, to render a visualization of the data.
My idea was to create a grid of 27 circles representing the alphabet, with the last circle reserved for special characters. I would then black out the circles to relay the string to the viewer. This worked well, but created a somewhat standard uniformity across the grid. To remedy this, I offset the letters being displayed going from top to bottom, which allows for patterns that create more of an animation. Changing the input string changes the look of the animation, while changing the input int changes the frame rate of the rendering.
The version above is actually the second version of the assignment, with some refinements and changes. Specifically: I fixed the opening "grid setup" to use an independent frame rate, so that you don't have to wait around when you use a small int. I also added the "growth" of the rendered circles, so that they no longer appear to flash on and off, which results in a smoother animation. Finally, I revised the code to be more character-agnostic, since this aided me in animating the circle "growth".
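The character-to-cell mapping and the per-row offset described above can be sketched as follows. This is a minimal Python sketch under stated assumptions (the original was presumably written in Processing, and these helper names are hypothetical):

```python
def char_to_cell(ch):
    """Map a character to one of 27 grid cells: a-z -> 0-25, anything else -> 26."""
    ch = ch.lower()
    if 'a' <= ch <= 'z':
        return ord(ch) - ord('a')
    return 26  # the special-character cell

def offset_rows(text, rows):
    """Build one list of cell indices per grid row, shifting each row by its
    index so the blacked-out circles stagger from top to bottom."""
    return [[(char_to_cell(c) + r) % 27 for c in text] for r in range(rows)]
```

The modular shift is what breaks the uniform columns: each row highlights the same string, offset by one cell, so cycling through the rows at the chosen frame rate reads as animation.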
Download Code: Theme and Variation
Thursday, September 17, 2009
Solving A Rubik's Cube, or, Exercises In Extreme Boredom
After a number of earnest attempts (both in the intellectual and physical worlds) to inspire myself in the art of solving a Rubik's Cube, I found myself completely uninspired and apathetic about the task. Solving the Rubik's Cube is an exercise in algorithm repetition and color matching, neither of which particularly appeals to me.
While it may not appeal to me, it was nonetheless a requirement of my class to demonstrate how the cube could be solved, and I needed a solution to that problem. As such, for those who are truly interested in solving the Rubik's, I offer up these fine options:
- Watch the video above. It's step 1 in an extremely lengthy (almost an hour!) tutorial. I started nodding off on step 4, but if this really interests you, the tutorial is exhaustive and complete.
- Go to this link. It's a Rubik's solver where you input your cube configuration, and it provides you with a solution.
- If you're more code-minded, go here. It's the source code for the above-mentioned solver, and should lend coders an algorithmic insight into solving the cube.
Monday, September 14, 2009
Sol LeWitt At Columbus Circle
The Architect's Newspaper Blog has a nice mention of a Sol LeWitt installation that was unveiled yesterday at the Columbus Circle subway station. Seems like a prime candidate for an ITP field trip!
Sunday, September 13, 2009
Medical Data, Trending, And Visualization
Wired has a pretty nice piece about aggregate medical data trending, and some pretty solid visualizations to go along with it. Definitely worth a look.
Reactions To "As We May Think"
Finished up reading Vannevar Bush's "As We May Think" from the 1945 Atlantic, and I have to say it was pretty entertaining. Everything about the article is quaint, from the tone to the ridiculously involved solutions to problems that have since ceased to exist. That being said, Bush's insight is remarkable, and he offers an amazing amount of foresight regarding the problems that would face the world of information technology in the coming decades.
One interesting component of the article is how evident it is that it was written pre-transistor. Almost all of Bush's solutions center around vacuum tubes and microfilm as the innovations of the day. This leads them to be surprisingly involved, and often overly complex. By contrast, the transistor enabled many of his innovations to be implemented in simple and beautiful ways. It's yet another clear reminder of just how much the transistor transformed the climate of technology.