CAtN Final

Introduction

Two weeks ago, I came up with an idea that functioned more as a momentary placebo amidst finals and all the underlying chaos. I wanted to find some way to give a more “magic realism” feel to a particular series of news reports (I initially thought of the NYTimes). However, after bouncing the idea around in my head several times, I decided it was not something I was willing to test and possibly fail at, given the time. I wanted an experiment that led me to something fun, and maybe to an idea that would take me somewhere else.

In any case, I decided to keep the magical realism element. After taking a look at @MagicRealismBot on Twitter, as well as reading this interview with its creators, Ali and Chris Rodley, the idea became clear. I recommend reading the interview; it is entertaining to see the mindset and the process behind such an effective and endearing tool.

I thought about some comments made in class about using GPT-2 to help with the thesis process (Neta and Nicole?). In any case, I thought it would be fun to give a bit of a twist to some of the articles I usually read on one of my favorite New Media platforms, Creative Applications Network. Specifically, the way the website displays the entry for each article made this easier. I’ll explain in more detail below.

Process

Creative Applications Network

You can follow along in this notebook I made in Google Colab. I decided to scrape all of the blog entries from the site. By blog entry I mean the short pieces of text you see below; every article has one. After scraping the 147 entries, I made a CSV file out of them so I could take a better look.
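The scraping itself could be sketched roughly like this. Note that the `entry-summary` class name is a stand-in, not CAN's actual markup, and a real run would fetch the listing pages over HTTP; a tiny inline sample stands in for a downloaded page here.

```python
from html.parser import HTMLParser

class EntryScraper(HTMLParser):
    """Collects the text of every element whose class is 'entry-summary'.
    (That class name is a placeholder -- CAN's real markup may differ.)"""
    def __init__(self):
        super().__init__()
        self.in_entry = False
        self.entries = []

    def handle_starttag(self, tag, attrs):
        if ("class", "entry-summary") in attrs:
            self.in_entry = True
            self.entries.append("")

    def handle_endtag(self, tag):
        # Assumes entries are flat (no nested tags inside the summary).
        self.in_entry = False

    def handle_data(self, data):
        if self.in_entry:
            self.entries[-1] += data

# In practice the HTML would come from an HTTP request; this sample
# stands in for one downloaded listing page.
sample = """
<div class="entry-summary">An interactive installation about light.</div>
<div class="entry-summary">A tool that sonifies city data.</div>
"""
scraper = EntryScraper()
scraper.feed(sample)
print(scraper.entries)
```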

I noticed most of the entries (or at least the ones I was interested in) followed a similar pattern: “X project” is a “Y adjective to make it sound innovative” “Z New Media medium” “doing whatever it does”. I made a small diagram of it below.

Right away I thought this type of one-line format (after cleaning up the data a bit) would mix perfectly with Magic Realism Bot’s posts (weird saying it that way). To clarify, the cleanup involved removing the names of the works and of the artists. I then formatted all of the entries into something acceptable to spaCy and the Markov chain algorithm, and wrote a script for it: it receives a CSV file with one column of text plus a name, and outputs a new file with ‘name’, ‘index’, ‘total’, and ‘text’ as column headers. All of this is in the notebook. I then proceeded to scrape tweets from @MagicRealismBot on Twitter.
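The reshaping script could look something like the sketch below. The column semantics are my reading of the post (row index and total row count alongside the source name), not an official schema, and the file paths in the usage comment are hypothetical.

```python
import csv

def reshape_csv(in_path, out_path, name):
    """Read a one-column CSV of text lines and write a new CSV with
    'name', 'index', 'total', and 'text' columns. The meaning of the
    columns is an assumption based on the description above."""
    with open(in_path, newline="", encoding="utf-8") as f:
        rows = [r[0] for r in csv.reader(f) if r]
    total = len(rows)
    with open(out_path, "w", newline="", encoding="utf-8") as f:
        writer = csv.writer(f)
        writer.writerow(["name", "index", "total", "text"])
        for i, text in enumerate(rows):
            writer.writerow([name, i, total, text])

# Hypothetical usage:
# reshape_csv("entries.csv", "formatted.csv", "creativeapps")
```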

Magic Realism Bot

I will not go into much detail about how I scraped it, since it is all in the notebook. I just want to make clear that I obtained about 3,200 tweets, which I took a screenshot of below.

spaCy, Tracery and the fun part

After having both of my sources, I joined them into one CSV file and processed it using spaCy, following Allison’s guide to corpus-driven narrative generation. The whole thing came to about 4,800 entries.

I also used a copy of Allison’s notebook, which let me use spaCy to extract entities, as well as actions, verbs, objects, etc., which I then used as input for Tracery.
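To show the general idea without depending on the actual notebook, here is a minimal, Tracery-like grammar expander in plain Python. The rule names and word lists are placeholders standing in for the entities, verbs, and objects spaCy extracted; the real project used the Tracery library itself.

```python
import random

# Placeholder rules: in the real project these lists would be filled
# with phrases extracted from the corpus by spaCy.
rules = {
    "origin": ["#kind# that #verb# #object#."],
    "kind": ["An interactive installation", "A performance", "A tool"],
    "verb": ["facilitates collaboration between", "quantifies", "sonifies"],
    "object": ["a human and a puddle of alcohol", "the materiality of nature"],
}

def flatten(symbol, rng=random):
    """Recursively expand #symbol# references, Tracery-style."""
    template = rng.choice(rules[symbol])
    while "#" in template:
        start = template.index("#")
        end = template.index("#", start + 1)
        inner = template[start + 1:end]
        template = template[:start] + flatten(inner, rng) + template[end + 1:]
    return template

print(flatten("origin"))
```

Each call picks a random rule per symbol, so repeated calls produce different mashed-up sentences, which is essentially what the experiment does with the combined corpus.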

I ran a couple of tests to see how it would go, and I think it went pretty well.

AND MY FAVORITE (which, sadly, I was too excited to even screenshot):

“An interactive installation that facilitates collaboration between a human and a puddle of alcohol.”

Other remarkable examples:

A bisexual fisherman falls in love with the use of CCTV.

A professor reads a poem about an arduino that can destroy metaphysics.

A new life as a performance.

A theologian discovers that Wikipedia does not exist.

The dancer’s body is extended and manipulated as a tool to quantify the world.

An interactive installation and performance inspired by light rays traveling in a latent space of situations.

A project explores possible alternatives of how we experience the materiality of nature through the mediums of fiction.

Exploring behavior-based design systems that are self-aware, mobile, and self-structure / assemble.

To compensate for the lack of material I had from the blog, I duplicated those entries several times, along with some of the examples shown above. By the end it came out to 3,200 entries (Magic Realism Bot) vs. 1,600 (Creative Applications Network plus handpicked generated ones).
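The duplication step is just oversampling the smaller corpus; a minimal sketch (the target count here is illustrative, not the actual ratio used):

```python
def oversample(entries, target):
    """Repeat the list of entries until it reaches `target` items,
    truncating the last repetition if needed."""
    out = []
    while len(out) < target:
        out.extend(entries)
    return out[:target]

# Toy example: grow a 3-item corpus to 8 entries by repetition.
small = ["entry a", "entry b", "entry c"]
balanced = oversample(small, 8)
print(len(balanced))
```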

Output?

Then I came across this talk by Kate Compton (creator of Tracery), the most enlightening talk about procedural generation I have seen, and it occurred to me that I did not want to leave the output of this experiment in text only.

So through this VR experiment I wanted to include several elements I became captivated by during the semester, such as the sense of waiting present in Epitaph, or the idea of spatial (environmental) storytelling in Bitsy. Also, I felt some of these ideas were brilliant, and on a computer screen I seemed to get distracted by my other 127 tabs.

So ideally, you would be able to walk to other beams of light, each of which generates text with different parameter values (if I were to use GPT-2, for example, each beam in a row would add +0.1 to the temperature parameter). You would be able to pick some of these up and keep them.
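That row-to-parameter mapping could be as simple as the sketch below. The base temperature is an assumption on my part; the post only fixes the +0.1 step per beam.

```python
BASE_TEMPERATURE = 0.7  # assumed starting value; only the 0.1 step is given

def beam_temperature(row):
    """Map a beam's row index to a GPT-2 sampling temperature,
    adding 0.1 per row as described above."""
    return round(BASE_TEMPERATURE + 0.1 * row, 2)

print([beam_temperature(r) for r in range(4)])  # [0.7, 0.8, 0.9, 1.0]
```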

EDIT: Worked!