Tweeting is such sweet sorrow…

The semester has come to an end, and with it, the time to submit this final assignment for DECO2606, Real-Time Multimedia, a ‘Design Computing’ elective at The University of Sydney.

I thoroughly enjoyed this unit and was excited about it even before enrolling. I was conscious of my experience (or lack thereof) with Processing, as I hadn’t really used it since the ‘Design Programming’ unit in first semester of first year, so I started by rehashing the basics.

Background

Not being much of a programmer, I decided against a game and went with something more “expressive” with a hint of data-viz geekiness: in other words, an attempt at creating computer-generated “artwork” (little did I know that this whole ordeal would prove to be quite the challenge).

My concept involved obtaining data from Twitter and using it to create pretty pictures on a screen. I emphasised that this would not strictly be a truthful, accurate visualisation of data, but more of an ‘abstract’ piece showing the possibilities of using real-life external data sets to create more meaningful art in Processing, rather than arbitrary, purely algorithmic creations. In the end I decided to focus on tweets containing the words “love” and “hate”, as I thought such strong words would make an interesting concept to follow.

Because my initial aesthetic for the piece revolved around the idea of spherical geometry, I spent a lot of time trying to get around the mathematics of it all. In 2D space it was usually fine, but in 3D space I was losing my mind. Realising I was wasting too much time, and coming to the conclusion that a single spherical object wouldn’t be expressive enough for my piece, I moved on.

Always having been a fan of generative art, A-life systems, cellular automata and the like, I looked into various works of this nature. I experimented with systems like “Conway’s Game of Life” and “Langton’s Ant”. While I didn’t settle on either aesthetically, both proved to be important projects that sharpened my skills in Processing and prepared me for what was coming up.

Fig 1. Experiments with Langton's Ant inspired rules.

Fig 2. More experiments with Langton's Ant inspired rules.
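For reference, the rule behind these experiments is tiny. Here’s a minimal sketch of the standard Langton’s Ant step in plain Java (my own reconstruction of the well-known rule, not the actual experiment code, which added its own variations):

```java
// Classic Langton's Ant: on a white cell turn right, on a black cell
// turn left; then flip the cell and move forward one square.
public class LangtonsAnt {
    static final int SIZE = 64;
    boolean[][] black = new boolean[SIZE][SIZE]; // false = white
    int x = SIZE / 2, y = SIZE / 2;              // start in the centre
    int dir = 0;                                 // 0 up, 1 right, 2 down, 3 left

    void step() {
        if (black[x][y]) {
            dir = (dir + 3) % 4; // black cell: turn left
        } else {
            dir = (dir + 1) % 4; // white cell: turn right
        }
        black[x][y] = !black[x][y]; // flip the cell we are leaving
        switch (dir) {              // move forward, wrapping at the edges
            case 0: y = (y + SIZE - 1) % SIZE; break;
            case 1: x = (x + 1) % SIZE; break;
            case 2: y = (y + 1) % SIZE; break;
            case 3: x = (x + SIZE - 1) % SIZE; break;
        }
    }

    public static void main(String[] args) {
        LangtonsAnt ant = new LangtonsAnt();
        for (int i = 0; i < 1000; i++) ant.step();
        int flipped = 0;
        for (boolean[] row : ant.black)
            for (boolean b : row) if (b) flipped++;
        System.out.println("black cells after 1000 steps: " + flipped);
    }
}
```

From this tiny rule you get the famous chaotic patterns (and eventually the “highway”), which is exactly what makes it so tempting as a base for generative visuals.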

At this point I realised I was trying too hard, and told myself to re-evaluate my skill set and my desired outcome. Back in reality, going through notes I had taken in the first few weeks of class, I stumbled across the code used for simple collision detection between objects. This piqued my interest, and I pondered how I could translate this technique into a way of creating generative art… and this is how my final aesthetic was born.
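At its core, that collision-detection technique is just a comparison of the distance between two circle centres against the sum of their radii. A minimal sketch of the idea (my own reconstruction for illustration, not the actual class code):

```java
public class Collision {
    // Two circles overlap when the distance between their centres is
    // less than the sum of their radii. Comparing squared distances
    // avoids an unnecessary square root per check.
    static boolean circlesTouch(float x1, float y1, float r1,
                                float x2, float y2, float r2) {
        float dx = x2 - x1, dy = y2 - y1;
        float rSum = r1 + r2;
        return dx * dx + dy * dy < rSum * rSum;
    }

    public static void main(String[] args) {
        System.out.println(circlesTouch(0, 0, 10, 15, 0, 10)); // true: overlapping
        System.out.println(circlesTouch(0, 0, 5, 20, 0, 5));   // false: apart
    }
}
```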

Process/Research

After the first iteration of my artwork was complete, we were required to present our work so far to the class. It was after this class that I received valuable feedback from my peers and from Rob, our lecturer and tutor for the unit. Some important things were raised, like the mapping of colours to emotions and whether the message and feeling I was trying to convey were coming through strongly enough. This puzzled me: not because I disagreed, but because I hadn’t realised how important these things are in art; I had always assumed they were subjective. I suppose they are, but there are also a few basic rules to at least loosely follow; one that Rob brought up was that colours carry different meanings in different cultures. I struggled with this. I knew my mapping of the colours wasn’t very strong (at the time, love was mapped to white and hate to red), and while it worked in my head, it wasn’t very evident.

I did some research into colour, ranging across “colour psychology”, the study of how colour relates to the mind; I was primarily interested in the potential of colours to affect mood. One annoying thing about this research was that there was never a definitive answer to which colour best describes a particular emotion, especially “hate”. I decided to go with the information in the paper “Mapping Emotion to Color” (Niels A. Nijdam, Human Media Interaction, University of Twente, the Netherlands). In the ‘Color wheel pro summary’ (table 4), it mentions that a negative trait of the colour red is “offensive”, and the emotions associated with it include “aggressive” and “anger”. This gave me some relief that, in some circumstances, “hate” can be mapped to red. So what of love, then? Was I to stick with white, or move to a cliché pink? In the same table, white is listed as having the positive traits of “purity” and “innocence”. Is love pure and innocent? Well, I guess you would hope so. There was no data on pink, but in a westernised country you most likely come to know the mapping of pink to all things love. In the end, I decided on pink.

Another thing I realised when creating this sketch was that love is not limited to English; I would be missing a lot of tweets if I only checked English ones. So I did some research into the most popular languages on Twitter and incorporated these into the sketch.
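A rough sketch of how tweets could be categorised across languages. The word lists here are illustrative placeholders, not the actual lists I used, and simple substring matching will produce false positives (“glamorous” contains “amor”), so treat this as the idea only:

```java
import java.util.Arrays;
import java.util.List;

public class TrackWords {
    // Hypothetical word lists: "love" and "hate" in a few of the
    // languages most common on Twitter (English, Spanish, French,
    // Japanese). Placeholder data for illustration.
    static final List<String> LOVE = Arrays.asList("love", "amor", "amour", "愛");
    static final List<String> HATE = Arrays.asList("hate", "odio", "haine", "嫌い");

    // Categorise a tweet: 1 = love, -1 = hate, 0 = neither.
    // Naive substring matching; a real sketch would want word boundaries.
    static int categorise(String text) {
        String lower = text.toLowerCase();
        for (String w : LOVE) if (lower.contains(w)) return 1;
        for (String w : HATE) if (lower.contains(w)) return -1;
        return 0;
    }

    public static void main(String[] args) {
        System.out.println(categorise("Te amo, mi amor")); // 1
        System.out.println(categorise("I hate mondays"));  // -1
    }
}
```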

The sketch

So the sketch, thanks to twitter4j, now uses the Twitter Stream feature of the Twitter API rather than the basic search: a stream of tweets containing the words “love” and “hate” is constantly fed to my sketch. Because the sketch is loosely based around the geolocation of those tweets, I make sure to use only tweets that have a geolocation attached. Each tweet, depending on its category, is then assigned its respective colour: pink for love and deep red for hate.

These tweets are plotted on the screen in a worldwide context. To show how naive I was when programming this, I innocently assumed I would simply be able to map the longitude and latitude of each tweet to cartesian coordinates using the map() function. I was very wrong. Luckily, with some research, I found an equation on an ActionScript forum that was easily translatable and did the trick.

As I said, the tweets are plotted on the screen based on their geolocation, with the radius of each plot dependent on the size of the tweet (longer tweet, larger radius). This is where the magic of Rob’s collision detection code comes in: if tweets are within a certain (arbitrary) distance of each other, they set off a chain reaction of ellipses and lines. If a love tweet ‘touches’ another love tweet, they are joined by a line and an ellipse is drawn around them, and vice versa for hate. What you then get is a screen that is mostly pink with some red (thankfully the twitterverse is quite positive these days) that looks as if it is bleeding through. To finish it off, every once in a while the array is cleared and the result is blurred for a nice aesthetic.
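For the curious: a plain linear map() distorts latitude badly, which is why a projection formula is needed. Here is a sketch of a standard Mercator-style mapping in plain Java (my own illustration of the technique; this may not be the exact equation from the forum):

```java
public class TweetPlot {
    // Map longitude/latitude (degrees) to screen coordinates using a
    // Mercator projection. Longitude maps linearly; latitude goes
    // through the Mercator function, which is why a simple map() of
    // latitude to y looks wrong.
    static float[] project(double lon, double lat, float w, float h) {
        float x = (float) ((lon + 180.0) / 360.0 * w);
        double latRad = Math.toRadians(lat);
        double mercN = Math.log(Math.tan(Math.PI / 4.0 + latRad / 2.0));
        float y = (float) (h / 2.0 - w * mercN / (2.0 * Math.PI));
        return new float[] { x, y };
    }

    public static void main(String[] args) {
        // Sydney is at roughly lon 151.21, lat -33.87.
        float[] sydney = project(151.21, -33.87, 800, 600);
        System.out.printf("Sydney -> (%.0f, %.0f)%n", sydney[0], sydney[1]);
    }
}
```

On an 800×600 canvas this puts Sydney in the lower right, as you would expect on a world map.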

Reflection

While it most certainly did not attain the level of aesthetic I envisioned, I found this to be a truly rewarding experience. It reinforced my belief in crowd-sourced information and the possibilities it opens up, whether that be art (in my case) or more practical things like data visualisation to assist with digesting large amounts of information. I did attempt sketches for a few major cities, but this wasn’t as easy as I thought: implementation-wise it was not a problem, but it didn’t achieve the aesthetic I was aiming for, mainly due to the lack of geolocated tweets, and because the only way to test the code properly for New York is to be awake at around 3–4 am. I’m still very much interested in generative art, and it’s something I’ll have to play with more to fully understand conceptually. In retrospect, rather than see this as a piece of art I am happy with, I see it as a primer for further investigation into generative art techniques and the abstract visualisation of ‘meaningful data’.

Thanks Rob!

Screenshots (process works and final)

Fig 3. Testing the plotting of geolocated tweets. White = love, red = hate.

Fig 4. Further testing.

Fig 5. And even more testing. Circle size = tweet size

Fig 6. Testing of aesthetic elements.

Fig 7. Testing geolocation. Each point of origin is roughly where tweets are coming from, red represents hate tweets.

Fig 8. Close to final aesthetic; circles represent where tweets are. The concentration around the ‘shape of the US’, Europe and Indonesia is very obvious (refer to earlier sketches for a general idea of locations).

Fig 9. Final aesthetic. Plots are less intrusive but the origin is still visible (for the most part).


Particle Systems and Forces

After playing around with the particle-system and forces sketches from the labs, I’ve come to realise that they may be valuable to the concept I have in mind.

Combining the forces with the particle systems as we have done in the labs would be a very useful addition to the aesthetics of my final concept.

With the forces, I would like to somehow stimulate them from an external source; whether by a feed from a camera or the microphone, I’m not sure yet. Manipulating the forces externally would be really cool, and I always like the idea of interacting physically with digital systems.
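As a starting point, the lab-style particle-and-force combination can be sketched with simple Euler integration (an illustrative reconstruction of the general technique, not the actual lab code):

```java
public class Particle {
    // A minimal particle under an applied force, integrated with Euler
    // steps: acceleration from F = ma, then velocity, then position.
    float x, y;      // position
    float vx, vy;    // velocity
    float mass = 1;

    void applyForce(float fx, float fy, float dt) {
        vx += fx / mass * dt; // a = F/m, integrated over dt
        vy += fy / mass * dt;
        x += vx * dt;         // position integrates velocity
        y += vy * dt;
    }

    public static void main(String[] args) {
        Particle p = new Particle();
        for (int i = 0; i < 60; i++)           // one second at 60 fps
            p.applyForce(0, 9.8f, 1 / 60f);    // constant downward force
        System.out.printf("y = %.2f, vy = %.2f%n", p.y, p.vy);
    }
}
```

Swapping the constant force here for a value driven by microphone amplitude or camera motion is exactly the kind of external stimulation described above.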


Ideas/Concepts and other interesting tidbits

I am more interested in creating a sort of ‘toy’ rather than a competitive game-like experience. I would like to look into using external sources like sound (microphone) and video (camera) in my application to cause some sort of reaction; the user won’t necessarily know what they’re doing or what they have to do but will take part in it purely because it is engaging and immersive.

Another idea is an interactive musical instrument; I like the idea of the user creating the instrument to suit what they want to play and how they want to play it. An interesting example is the app ‘Squiggle’, which lets you draw lines and use them as a stringed instrument.

‘Reactable’ is a well-known interactive musical instrument that uses tangible elements, but mobile versions also exist.

I also like the idea of a generative art/music application.

Aphonium

 

Edit: Thanks to Rob for showing me Bubble Harp.

Node Beat

This is a visually impressive application.

Digital wallpapers are randomly generated and projected onto a wall using projectors. I could see this idea translated into a wallpaper-creation application for portable devices. For example, you could use the camera on the device to grab the colour palette, and perhaps the actual camera feed, and combine these with other elements to create a unique wallpaper. In general, I like the idea of using real-world and/or external data sources to create something unique.
