Team
Anuraj Bhatnagar
Grace Halverson (Portfolio)
Katherine Bennett (Portfolio)
Instructor: Dr. Anne Sullivan
Jadu is a generative art project for the Computer as an Expressive Medium class at Georgia Institute of Technology.
PART 1
Introversion and extroversion are two ends of a spectrum of behavioral patterns. Many people self-identify as one or the other according to stereotypes, such as the notion that introverts prefer to be alone while extroverts prefer to party. These oversimplified stereotypes often lead to misrepresentation and belittlement of an individual’s social potential.
The difference in how each sees the world, and themselves within it, does not necessarily affect their willingness to participate as social beings, but rather how they respond to external stimuli.
In a conversational setting, the behavioral difference between personality types shows most clearly in the frequency, response time, and intensity of contributions. We introduced elements of randomness to the appearance of both categories to represent the spectrum of potential within each. Although they are different, if the user gives both personality types equal ownership of the conversation, the result can be more beautiful than either on its own. In our generative art project, we aimed to represent the presence of introverted and extroverted behaviors in a conversation.
We chose to represent extroversion as rectangles of varying color intensity, stroke weight, opacity, and size. The rectangles are drawn continuously, at a higher rate than the ellipses. In recognition that not all extroverts aim to frame conversations, a rectangle may occasionally be filled rather than drawn as an outline.
The introvert is invoked by a right mouse click and is represented by a circle of varying color, opacity, and size. Each circle can claim just as much of the conversation by making its presence as apparent as the extrovert’s, but the user must be more intentional, deliberately pressing the right mouse button.
On a standard mouse there are left and right buttons. The left click is often referred to as the “normal” click and is used far more frequently. The right click, by contrast, is the home of the “hidden options.” Despite its lower frequency of use, the right click has a lot of value: it is often the button used to surface more information and more reflective explanations of something a user does not readily understand. When the right mouse button is not pressed or held, the conversational space is dominated by the continuously drawn rectangles (extroversion).
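A minimal Processing sketch of this Part 1 interaction might look like the following; the exact random ranges, sizes, and colors are illustrative placeholders rather than our final values.

// Illustrative sketch of the Part 1 interaction.
void setup() {
  size(800, 600);
  background(255);
}

void draw() {
  // Extrovert: rectangles draw continuously, with random color, opacity, stroke weight, and size.
  stroke(random(255), random(255), random(255), random(50, 255));
  strokeWeight(random(1, 5));
  if (random(1) < 0.3) {
    fill(random(255), random(255), random(255), random(50, 200)); // occasionally filled...
  } else {
    noFill();                                                      // ...but usually just a frame
  }
  rect(random(width), random(height), random(10, 80), random(10, 80));

  // Introvert: a circle appears only while the right mouse button is held.
  if (mousePressed && mouseButton == RIGHT) {
    noStroke();
    fill(random(255), random(255), random(255), random(50, 255));
    ellipse(random(width), random(height), random(20, 120), random(20, 120));
  }
}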
Our generative art project encourages the user to take the time to include more introverted personalities and reap the beautiful benefits of conversational inclusion from both sides of the personality spectrum.
PART 2
TeaTime TableTop
This subproject is an exploration of the interactive potential of our Jadu Generative Art Project, an interactive visual representation of introversion and extroversion in a conversational setting. We posited that personalities exist along a spectrum rather than as the polarized opposites of introversion and extroversion. By allowing equal opportunities of representation for introverts and extroverts through color, opacity, and frequency of shape placement, we aimed to challenge the common social perception of extroverted personalities as holding greater importance and power.
In this part of the project, we created a tabletop controller that further sets the contextual environment and encourages users to be more involved in their conversations. Additionally, while certain shapes are placed strategically on the board, increased interaction by the users also produces a more randomized set of responses, reflected on a computer screen.
There is an equal number of capacitive touch points for each shape, placed across the board in an abstracted place setting of plate and teacup. This arrangement relies on body language and encourages interaction in order to create the optimal generative art. Giving another person the opportunity to engage in a conversation, and contributing to a social environment, requires conscious consideration.
A big shout out to our TA, Tom Jenkins, our professor Anne Sullivan, and to DILAC’s Michael Vogel for their assistance in bringing our art to life.
The relationship between circuits and code was new and daunting. The personalized insights of our instructors were therefore as instrumental in reducing group anxiety as in their expert contributions to meaningful making: certainly we could and did look up helpful tutorials, examples, libraries, and user group discussions, but none of these resources could match the direct and particularized interactions with our human guides.
Tom helped us understand the Arduino CapacitiveSensor library for capacitive touch and how to translate it into code. Initially, we downloaded the library and wired a single piece of foil to the Arduino. Then we prototyped simple code to send printed text from the Arduino to Processing. Processing’s built-in Serial library can listen to a serial port, and we set it to listen to the port used by the Arduino.
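As a rough sketch of that first single-foil prototype, the Arduino side with the CapacitiveSensor library might look like this (the pin numbers, sample count, and delay are placeholders, not necessarily the values we used):

// Single-foil prototype: one capacitive sensor, its value printed to the serial port for Processing to read.
#include <CapacitiveSensor.h>

// Send pin 4, receive pin 2; the foil is attached to the receive pin.
CapacitiveSensor sensor1 = CapacitiveSensor(4, 2);

void setup() {
  Serial.begin(9600);
}

void loop() {
  long total1 = sensor1.capacitiveSensor(30); // 30 samples per reading
  Serial.println(total1);                     // Processing listens on this serial port
  delay(50);
}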
We then extended this code to two different pieces of foil, using two different capacitive sensors. We connected both sensors in parallel, mapping one to the circle and the other to the square. We wired the circuit with a breadboard, forming the connection between the two pieces of foil and the Arduino pins (two for each).
Anne suggested that we send a string from Arduino to Processing as a conditional check, with Processing responding by drawing the shape mapped to the text contained in the string. So we used the capacitiveSensor(samples) function on each capacitive sensor and stored the values in total1 and total2. If either value exceeds a certain arbitrary threshold, the Arduino prints either “Circle” or “Square” to the serial port, and Processing picks that up and stores it in a String. Based on Anne’s suggestions, we used the .equals() function to check whether the String stored a circle or a square. Depending on this condition, either the circle-drawing or the square-drawing function executes. A brief algorithm follows:
Arduino
- total_z = capacitiveSensor_x_y(30) (where z is an arbitrary index number for the shape, and x and y are pin numbers)
- For each shape:
  - if total_z > 1000, Serial.print text(shape_z) (where shape_z is the shape mapped to z)
  - else, print no touch
- Repeat for every touch detected
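A hedged reconstruction of that Arduino side follows. The pin numbers and delay are illustrative choices of our own; the 1000 threshold, the 30-sample call, and the printed strings come from the algorithm above.

// Two sensors, one per shape; readings above an arbitrary threshold are reported as touches.
#include <CapacitiveSensor.h>

CapacitiveSensor sensor1 = CapacitiveSensor(4, 2); // foil mapped to the circle
CapacitiveSensor sensor2 = CapacitiveSensor(4, 6); // foil mapped to the square

void setup() {
  Serial.begin(9600);
}

void loop() {
  long total1 = sensor1.capacitiveSensor(30);
  long total2 = sensor2.capacitiveSensor(30);

  if (total1 > 1000) {
    Serial.println("Circle");
  } else if (total2 > 1000) {
    Serial.println("Square");
  } else {
    Serial.println("No Touch"); // later removed to reduce lag, as described below
  }
  delay(50);
}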
Processing
- Store the serial text in a String var
- Avoid a NullPointerException by testing whether var is null, and trim var so that no whitespace is stored (avoiding an IndexOutOfBounds)
- if (var is text(shape_z)), call drawShape_z()
- else, print no touch to the console
- Test for every shape, repeating the previous step
- Repeat for each print to the serial port
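On the Processing side, a sketch along the following lines implements those checks. The serial port index, window size, and drawing-function bodies are placeholders; the null test, trim(), and .equals() comparisons follow the algorithm above.

// Processing side: read the string printed by the Arduino and draw the shape mapped to it.
import processing.serial.*;

Serial port;

void setup() {
  size(800, 600);
  background(255);
  // Index 0 is a placeholder; the Arduino may appear at a different position in Serial.list().
  port = new Serial(this, Serial.list()[0], 9600);
}

void draw() {
  if (port.available() > 0) {
    String val = port.readStringUntil('\n');
    if (val != null) {        // avoid a NullPointerException on partial reads
      val = val.trim();       // strip whitespace and line endings before comparing
      if (val.equals("Circle")) {
        drawCircle();
      } else if (val.equals("Square")) {
        drawSquare();
      } else {
        println("no touch");
      }
    }
  }
}

// Placeholder drawing functions; the real sketch randomized color, opacity, and size.
void drawCircle() {
  noStroke();
  fill(random(255), random(255), random(255), random(50, 200));
  ellipse(random(width), random(height), random(20, 100), random(20, 100));
}

void drawSquare() {
  stroke(random(255), random(255), random(255), random(50, 200));
  strokeWeight(random(1, 4));
  noFill();
  rect(random(width), random(height), random(20, 100), random(20, 100));
}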
At this stage, we ran into a problem: lag. There was a delay between the Arduino’s execution and its printing to the serial port, and another delay between that printed text being stored in the string and Processing’s listener picking it up. These delays result in a time lag between touch input and on-screen shape formation. Based on feedback, we expect to accept and work with this delay, incorporating it into the designed interface experience rather than resolving it.
We also ran into the question of frame rate. In the preceding project, we had avoided a too-rapid proliferation of our generated shape patterns by regulating the frame rate. This regulation compounded the lag problem, as slow executions of either shape delayed executions of the other. Anne suggested a conditional check on the frame count, using a modulus operation to draw a shape only every 20 frames (sketched below). However, after further testing, we realized that the lag was already regulating the speed, and the frame count check added even more delay to the generative process, so we removed the conditional check.
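The frame-count gate we tried and then removed amounted to something like this in Processing (the shape and sizes here are illustrative):

// Sketch fragment illustrating the modulus check on the frame count.
void setup() {
  size(400, 400);
  background(255);
}

void draw() {
  if (frameCount % 20 == 0) {   // draw a new shape only every 20th frame
    fill(random(255), 150);
    noStroke();
    rect(random(width), random(height), 40, 40);
  }
}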
There were two main things to be addressed in the code: the lag time between touch and output in Processing, and the sensitivity of the capacitive touch. As soon as Anne was available, we met with her to address the lag time.
With further testing, we concluded that the Arduino was printing too many strings to the serial port, and the Processing code was checking for too many strings. Our older code had the Arduino printing Square, Circle, or No Touch for the shapes, as well as the capacitive sensor values (total1 and total2), to the serial port, and Processing was checking for all three conditions (square, circle, and no touch) and printing to its own console as well. We had done this to test our prototype, but it contributed to the lag.
So we commented out the printing of the No Touch string, as well as the printing of total1 and total2. Combined with modifying the Processing code to check only for square or circle, and to do nothing if neither was touched, this effectively eliminated the lag.
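In the Arduino sketch shown earlier, the change was confined to loop(); roughly:

// Revised loop(): only "Circle" or "Square" ever reach the serial port.
void loop() {
  long total1 = sensor1.capacitiveSensor(30);
  long total2 = sensor2.capacitiveSensor(30);

  // Serial.println(total1);   // debug values, commented out to cut serial traffic
  // Serial.println(total2);
  if (total1 > 1000) {
    Serial.println("Circle");
  } else if (total2 > 1000) {
    Serial.println("Square");
  }
  // The "No Touch" print is gone; Processing now simply does nothing when no string arrives.
  delay(50);
}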
Another issue we looked at was the threshold value for total1 and total2. Our code checked the values of total1 and total2 against a certain threshold to detect touch. However, this threshold was heavily susceptible to conditions such as the temperature of the room, and thus required frequent modification. Anne therefore suggested that we calculate a baseline value over the first 100 frames the Arduino runs and add a certain arbitrary amount to it, giving us a dynamic threshold to compare total1 and total2 against. While this has been extremely helpful, we still find variance in the threshold value, more specifically in the arbitrary amount that we add to the baseline.
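A sketch of that calibration for one sensor follows. The 100-reading window comes from the suggestion above; the offset of 500 and the pin numbers are illustrative, and the second sensor is handled the same way.

// Dynamic threshold: average the first 100 readings as a baseline, then add a fixed offset.
#include <CapacitiveSensor.h>

CapacitiveSensor sensor1 = CapacitiveSensor(4, 2);

long baselineSum = 0;
int calibrationCount = 0;
const int CALIBRATION_READINGS = 100;
const long OFFSET = 500; // the arbitrary amount added to the baseline; still needs occasional tuning

void setup() {
  Serial.begin(9600);
}

void loop() {
  long total1 = sensor1.capacitiveSensor(30);

  if (calibrationCount < CALIBRATION_READINGS) {
    baselineSum += total1;    // accumulate untouched readings for the baseline
    calibrationCount++;
    return;                   // skip touch detection while calibrating
  }

  long threshold1 = baselineSum / CALIBRATION_READINGS + OFFSET;
  if (total1 > threshold1) {
    Serial.println("Circle");
  }
  delay(50);
}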
We further explored the sensitivity of the capacitive touch sensors: merely hovering a hand over one of them would cause a reaction. This exploration helped reinforce our decision to move away from the tablecloth concept, reducing the risk of interference from factors beyond the conversation participants.
When we originally decided to make a tabletop for the Generative Art project, we had speculated that a visual output might be distracting and detract from the social interaction on the participants’ end. This hypothesis proved true during our in-class demo. As shapes began to appear in Processing, participants eventually turned their bodies to face the screen instead of each other. Instead of engaging solely in their conversation, the temporal evolution of their interaction became dependent on the Processing output. When each wave of reactionary shapes appeared, they reacted with a feeling of success and then tried harder to interact with the objects on the table instead of letting the conversation unfold naturally.
While we wanted to create a safe environment for sharing and for play, we hoped our project would encourage participants to engage primarily with each other rather than with the screen. We wanted a way for Processing to write the frames of the interaction into an mp4 file, so participants could watch a visual representation of their interaction afterwards without it influencing the conversation. We could have used the saveFrame() method, but that saves each frame as a separate image. Instead, we looked for a library that would combine all the frames into a video and store it in the same folder as the Processing sketch; the Video Export Processing library served this purpose. The initial trials produced an extremely fast playback frame rate that did not accurately represent the timeline of the interaction. While we were able to slow the frame rate somewhat, it is still faster than the actual interaction.
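The recording side of the sketch looked roughly like the following. This assumes the com.hamoid Video Export library (which in turn requires ffmpeg on the machine); the file name, frame rate, and quit key are placeholders, and exact method names can differ between library versions.

// Record each drawn frame into an mp4 stored next to the sketch, using the Video Export library.
import com.hamoid.*;

VideoExport videoExport;

void setup() {
  size(800, 600);
  background(255);
  videoExport = new VideoExport(this, "conversation.mp4");
  videoExport.setFrameRate(10);  // a lower playback rate to better match the real interaction
  videoExport.startMovie();
}

void draw() {
  // ...generative drawing driven by the serial input goes here...
  videoExport.saveFrame();       // append the current canvas as one video frame
}

void keyPressed() {
  if (key == 'q') {
    videoExport.endMovie();      // finalize the mp4 before quitting
    exit();
  }
}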
Video of the Processing prototype (note: the prototype begins to display 15-20 seconds in).
Images of the final tabletop are visible below.