
This project is a collaboration between NUA and UEA to produce an outcome for Rainbird AI.

 

We have a small group working together on the project at its current stage and have had a few discussions about data visualisation in the context of machine learning.

 

Our group was given access to the Rainbird interface in April. This effectively marked the start of the production stage of the project.

 

This is a screenshot of me creating a knowledge map in the Rainbird engine.

The rules of the Rainbird engine are based on “triplets”: the interactions between the subject, object and relationship displayed.

 

In this example I’ve made a knowledge map which asks the user questions in order to determine the likelihood that they can drive safely.

 

The “driver” bubble here represents the subject, and the relationship “drives” points to the object “car”, which sits at the arrowhead of the relationship arrow.

This is the coding language: essentially the previous diagram presented as code. Instances need to be set between the knowledge maps for them to function correctly.
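The triplet idea above can be sketched as a tiny data model. This is a hypothetical Python sketch of the subject–relationship–object structure, not actual Rainbird code; the names and functions are my own illustration.

```python
# Hypothetical sketch of a "triplet" knowledge map: subject -> relationship -> object.
# Not real Rainbird code; the names and structure are illustrative only.
triplets = [
    ("driver", "drives", "car"),
    ("driver", "holds", "licence"),
]

def objects_of(subject, relationship):
    """Return every object linked to a subject by the given relationship."""
    return [o for s, r, o in triplets if s == subject and r == relationship]

print(objects_of("driver", "drives"))  # ['car']
```

Each arrow in the diagram becomes one entry in the list, which is why the map and the code view carry the same information.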

 

Rainbird searches its database to try to find a solution to the question. Each time it does this it improves its overall effectiveness, using trial and error and the salience of the answers to establish the knowledge it needs.

Once the knowledge maps are partially functional they can be exported and used. This is the evidence tree screen, which the user receives after being given a result. The chart shows the instances and conditions to “show its working”.
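The “show its working” idea can be illustrated with a toy rule that records which facts it relied on when producing a result. A minimal sketch under assumed names; this is not how Rainbird’s engine actually works internally.

```python
# Toy illustration of an evidence trail: answer a question and record
# which facts (instances) and conditions the conclusion relied on.
# Hypothetical fact names; not Rainbird's real internals.
facts = {"holds licence": True, "passed eye test": True}

def can_drive_safely(facts):
    """Return (result, evidence): the conclusion plus the facts it used."""
    evidence = [(condition, facts[condition]) for condition in facts]
    return all(value for _, value in evidence), evidence

result, evidence = can_drive_safely(facts)
print(result)    # True
print(evidence)  # [('holds licence', True), ('passed eye test', True)]
```

The evidence list plays the role of the evidence tree: the result alone says “yes”, but the trail explains why.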

The following images are smaller experiments for creating a narrative based around dialogue and result.

 

The error messages are based on my experience of using an out-of-date video driver to run the latest version of Photoshop with GPU acceleration turned on.

The visual effects are caused by the graphics card not being able to communicate with the operating system effectively, resulting in a “confused” image which neither machine nor user fully understands.

 

In the context of a dialogue box like this, it builds up ideas around a disconnection in communication.

Continuing experiments.

The idea of visualising Rainbird’s thoughts interests me because it is inherently not human, yet there is some crossover that feels relatable.

 

The disconnection and miscommunication of data and language is a large part of how Rainbird manages its database. This short animation is just me thinking about how the previous visuals fit into this narrative.

Contemporary interface design is usually based around flat colour and limited opacity, while in the previous decade, from 2000 to 2010, user interfaces were made to feel more futuristic through Aqua transition effects in Mac OS X and Aero transparency in Windows Vista/7.

 

This is an experiment playing with image and time period to create contradictions that construct narrative threads.

This is a trailer for the original Macintosh from 1984. At the time, large corporations did not see personal computers as valuable in the context of ordinary people’s lives. Steve Jobs pushed back against companies like IBM, who only wanted to create computers for big business.

 

The video communicates liberty by giving power to the everyday user. The character is a woman dressed in a bright orange outfit, running through a tunnel while being chased. The colours are in complete contrast with each other to aid the narrative.


In the following years graphic designers adopted the Mac for digital typography and image manipulation, with Adobe releasing Photoshop exclusively on the platform.

These are some designs I have created with an app called PhonoPaper for iOS. The application renders a graphical representation of the user’s spoken voice.

 

The outcomes are quite interesting because they remind me of an aesthetic related to ultrasound in medical environments; however, this is used in a different context and is also very accessible.
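Voice-to-image rendering of this kind is essentially a spectrogram: the audio is sliced into short frames and the frequency content of each frame becomes a column of marks. A minimal stdlib-only sketch of that general idea, not PhonoPaper’s actual algorithm:

```python
import cmath
import math

def dft_magnitudes(frame):
    """Magnitude of the discrete Fourier transform of one audio frame."""
    n = len(frame)
    return [abs(sum(frame[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n)))
            for k in range(n // 2)]

def spectrogram(samples, frame_size=64):
    """Split samples into frames; each row is one frame's spectrum."""
    return [dft_magnitudes(samples[i:i + frame_size])
            for i in range(0, len(samples) - frame_size + 1, frame_size)]

# A pure sine tone (8 cycles per 64-sample frame) concentrates its
# energy in a single frequency bin of each frame's spectrum.
tone = [math.sin(2 * math.pi * 8 * t / 64) for t in range(128)]
spec = spectrogram(tone)
peak_bin = max(range(len(spec[0])), key=lambda k: spec[0][k])
print(peak_bin)  # 8
```

The inconsistencies I like in the prints come from real voices being far messier than this clean tone: energy smears across many bins, which is what gives the marks their human feel.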

In terms of the AI project, the digital marks feel very human because of the inconsistencies in their production. I have previously talked about questioning the idea of the word “natural”.

 

Usually we would consider anything not man-made to be natural; in the context of a machine or computer, however, this idea is distorted.

This is a screenshot from a web animation seen here. The animation was produced alongside the Language Game[s] symposium at Chelsea College of Arts. I did not attend this event; however, the handout and website were really interesting in terms of the aesthetics used and the idea of machines having liberty.

 

This is quite interesting to think about in the context of the previously discussed Apple 1984 ad, which was about freeing technology for the masses. Now most of the population use devices like PCs and iPhones because they are made to serve us.

These are wireframe representations of the 3D objects in close perspective. They feel very procedural but also organised. There is a halfway point in the centre of the image which divides the geometrical cityscape.

 

It makes me think of an end sequence or a contradiction in velocity. The greyscale makes it feel very neutral and unemotional.

These images also make me question where I am in the space and how far these surfaces spread before changing.

 

With the addition of typography this could completely change the relationship depending on the language used.

Variations and experiments using texture and scale within a set layout.

 

The identity of the designs has some crossover between real physical environments and virtually constructed ones.

The addition of transparency and colour obscures the organised appearance of the design, relating it more towards an error or malfunction.

The symbol on the left is used as a function alongside a word used as a greeting in English. Neither can communicate with the other. The background aesthetic is biased towards the system function, which causes the “Hi” to feel trapped and also suggests some personality and consciousness.

When altered into a 3D space the narrative changes in some ways. It feels like a relic from a previous event.

 

It does not feel human or completely designed by a machine; it’s somewhere in between.

I’m continuing to experiment with the idea of crossovers between languages. In this design the “i” is cemented inside a virtual construction. It makes me think either of a point of interaction, for example a keyboard as a human input, or of a personality representing a virtual assistant like Apple Siri or Amazon Alexa. These software engines emulate a human verbal experience and learn from human feedback.

 

This is similar to how Rainbird uses large datasets as knowledge in the Rainbird engine.

These are some of the prints in 3D real space on Maylott’s presentation display. I think the project has gained a good amount of momentum up to this point, going into the future.
