Background and Process
Consensus came about in 2022 as an algorithmic solution born of an attempt to explore the interplay between language-based reasoning and generative algorithms. The solution combines the strengths of classical generative algorithms with machine learning models trained to output a particular style and feel of visuals.
Fundamentally, the Consensus algorithm consists of two layers of processes. The first layer relies not on machine learning models but on simple algorithmic logic in pure code. This layer divides the canvas into two opposing grids, whose values are extracted from an input hash that is initially fed into the algorithm.
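The first layer can be sketched in code. The actual extraction scheme Consensus uses is not spelled out here, so the byte-slicing and the mirroring rule below are illustrative assumptions, not the published method:

```python
import hashlib

def grids_from_hash(seed: str, size: int = 4):
    """Derive two opposing integer grids from an input hash.

    Illustrative sketch: bytes of a SHA-256 digest fill the first
    grid, and the second grid mirrors each value across the byte
    range, producing a contradicting integer for every cell.
    """
    digest = hashlib.sha256(seed.encode()).digest()  # 32 bytes
    values = list(digest)
    n = size * size
    flat_a = [values[i % len(values)] for i in range(n)]
    flat_b = [255 - v for v in flat_a]  # the "opposing" grid

    def to_rows(flat):
        return [flat[r * size:(r + 1) * size] for r in range(size)]

    return to_rows(flat_a), to_rows(flat_b)

grid_a, grid_b = grids_from_hash("example-hash")
```

Any string hash can seed the process, so the same input always reproduces the same pair of grids.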
These two opposing grids, matrices of contradicting integer values, then interact and meet, and at that point a dynamic play of integration unfolds.
This new set of values is used to divide the canvas into a final composition of divisions that make up the actual layout of the finished artwork. The original inspiration for this emergent plane came from the navigation of contradicting propositions and ideas found in dialectical thinking, where contradictions are not viewed as problems to be solved directly, but as the dynamic tension necessary for new truths to emerge.
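One way the integration step might be pictured in code: each column of the opposing grids is resolved into a single "tension" value, and the tension values set the relative widths of the canvas divisions. The absolute-difference rule here is an assumption chosen for illustration, not the actual Consensus integration rule:

```python
def divide_canvas(grid_a, grid_b, canvas_w=1000):
    """Turn two contradicting grids into horizontal division widths.

    Illustrative assumption: the tension of a column is the absolute
    difference of its column sums across the two grids, and each
    division's width is proportional to its tension.
    """
    cols = len(grid_a[0])
    tension = []
    for c in range(cols):
        a_sum = sum(row[c] for row in grid_a)
        b_sum = sum(row[c] for row in grid_b)
        tension.append(abs(a_sum - b_sum) + 1)  # +1 avoids zero-width divisions
    total = sum(tension)
    return [round(canvas_w * t / total) for t in tension]

# Hypothetical example grids standing in for the first layer's output.
a = [[10, 200, 40], [90, 15, 120]]
b = [[245, 55, 215], [165, 240, 135]]
widths = divide_canvas(a, b)  # three division widths summing to ~1000
```

Columns where the two grids disagree most strongly claim the widest divisions, so the layout literally emerges from the contradiction.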
Illustrated below are examples of final compositions resulting from the values that emerge in this tension plane. In this way, the general layout and setting of the artwork is the end result of the contradictions initially posed by the opposing grids in the first layer of the Consensus process.
The second layer of the Consensus solution is the machine learning model that generates the visual output. This model is the product of a training algorithm that underwent iterative updates throughout 2021-2023. Its core functionality is to take spoken verbal input, on which a series of analyses is performed, giving tone of voice a configurable impact on how the final text-to-image output is visualised. This allows spoken input in the second layer to be transformed into strokes and shapes of colour, imposed onto the grids of the tension plane defined in the first layer.
As the model interprets the spoken input, its configuration determines how it responds to tone of voice with particular choices of colours or patterns. Likewise, the choice and composition of words affects the colours and patterns emerging on the canvas. In this way, the composition that emerged from the tension plane in the first layer becomes a dynamic canvas in the second layer, where the voice can be used as a versatile brush.
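The tone-to-style mapping is performed by the trained model itself; the hand-written rule below merely stands in for it as a sketch. The two tone features (pitch and energy), the palette range, and the pattern names are all hypothetical:

```python
def tone_to_style(pitch_hz: float, energy: float) -> dict:
    """Hypothetical stand-in for the second layer's tone analysis.

    Assumed mapping: higher pitch shifts the hue across an imagined
    palette, and higher vocal energy selects a denser stroke pattern.
    """
    # Clamp pitch into an assumed vocal range of 80-400 Hz, then map
    # it linearly onto a 0-360 degree hue wheel.
    hue = max(0.0, min(360.0, (pitch_hz - 80) / (400 - 80) * 360))
    # An assumed energy threshold picks between two stroke patterns.
    pattern = "dense-hatch" if energy > 0.6 else "loose-stroke"
    return {"hue": round(hue), "pattern": pattern}

style = tone_to_style(pitch_hz=220.0, energy=0.8)
```

A real model would learn this mapping from data rather than apply fixed thresholds, but the configurable link between voice and visual style is the same idea.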
The examples above demonstrate execution of the second layer, where the machine learning model generates strokes imposed according to the composition defined in the first layer.
In total, the first layer generates and configures the composition of the canvas, relying on values that emerge from the collision of contradictory grids. Adhering to this composition, the second layer draws its interpretation of the spoken input in strokes and colours. This combined solution was originally labelled Consensus.
The first Consensus artwork
Using the Consensus algorithm, a small handful of artworks were created as the very first outputs of the finalised solution. The first of these is a significant output, a precursor to all works that have since been created with the solution. This original artwork is named after the algorithm itself, ‘Consensus’, as it came into being as part of the creation of the very solution.
Throughout the tuning and configuration of the algorithm, a certain colour scheme was worked into place. The generative output of the machine learning model allows the colours to “bleed”, while staying more or less consistent with the main input colours across the canvas. This introduces offspring colours that come about as a product of the “bleed”. Highlighted below are both the intended main input colours and the unintended offspring colours, along with the final Consensus artwork and close-up examples from it.
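An offspring colour can be pictured as a blend of two main input colours. Linear interpolation in RGB is an illustrative stand-in for the model's actual bleed behaviour, and the example colours are invented:

```python
def bleed(c1, c2, t=0.5):
    """Blend two main input colours (RGB tuples) to approximate an
    offspring colour produced by bleed.

    Sketch only: simple linear interpolation, assuming the bleed
    behaves roughly like mixing at ratio t between the two inputs.
    """
    return tuple(round(a + (b - a) * t) for a, b in zip(c1, c2))

# Hypothetical main input colours: a warm red and a cool blue.
offspring = bleed((200, 60, 40), (40, 80, 180), t=0.5)  # → (120, 70, 110)
```

Varying t across the canvas would yield a family of offspring colours that stay anchored to the main inputs, much as the text describes.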