
Introduction
Welcome to Tomorrow’s Blossoms, an innovative AI art project that explores the intersection of human creativity and machine learning. Conceived by multimedia artist Selina Scerri and AI expert Dr. Angelo Dalli, this groundbreaking project merges technology and artistic expression, creating an immersive exhibition that transforms complex datasets into captivating visual art. Through data dramatization, Tomorrow’s Blossoms offers a unique experience that fascinates viewers and fosters a deeper appreciation for the potential of AI in artistic endeavors.
How it All Began
The adventure began four years ago with a suggestion from Dr. Dalli. At that time, AI art was a nascent and mysterious field. Artists, including Scerri, were unsure of what AI-generated art could look like. Most believed it had to be something strange and alien, completely different from human-made art. The journey started with Generative Adversarial Networks (GANs), which build on the same neural-network techniques used in practical applications such as security, where computers learn to recognize objects by being fed vast amounts of data.
The Origins of AI Art
GANs, the first tool used in this project, were pivotal in the early stages of AI art. These systems worked by having two neural networks—the generator and the discriminator—compete against each other to create realistic images. The challenge was to figure out what kind of data to feed into these networks to create the desired art. Deep questions about human creativity arose: What drives an artist to create? How do emotions influence an artist’s work? Scerri knew that her emotions—whether happy, sad, or angry—affected her choices in color, texture, and boldness. These elements make each piece of art unique, but how do you teach an AI about emotions?
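The back-and-forth between the two networks can be pictured through the losses each one minimizes. The following is a minimal numpy illustration of the standard GAN objective, not the project's actual training code; `d_real` and `d_fake` stand for the discriminator's probability scores on real and generated images.

```python
import numpy as np

def discriminator_loss(d_real, d_fake):
    # The discriminator is rewarded for scoring real images near 1
    # and generated images near 0.
    return -np.mean(np.log(d_real) + np.log(1.0 - d_fake))

def generator_loss(d_fake):
    # The generator is rewarded when the discriminator is fooled into
    # scoring its images near 1.
    return -np.mean(np.log(d_fake))

# A confident, correct discriminator incurs a low loss...
good = discriminator_loss(np.array([0.9]), np.array([0.1]))
# ...while one that cannot tell real from generated incurs a higher one.
unsure = discriminator_loss(np.array([0.5]), np.array([0.5]))
```

Training alternates between the two updates until the generator's images are realistic enough to fool the discriminator.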
Collecting Emotional Data
Dr. Dalli and Scerri embarked on a fascinating journey to explore how people express their emotions visually. They gathered millions of images tagged with emotions like #love, #surprise, #anger, #joy, and #sadness from websites like Flickr and Instagram. Analyzing these images individually was impractical, so they were examined in groups. This analysis led to the creation of color palettes based on each emotion and the development of “emotional lines.” For example, images tagged with #joy often featured landscapes with straight lines and pastel colors, while images tagged with #anger showed harsh lines and zigzag patterns.
Creating Emotional Palettes and Lines
From the grouped images, Scerri created color palettes that represented each emotion. These palettes were essential in teaching the AI to understand and replicate human emotional experiences in its artwork. The project also identified distinct emotional semiotic symbols within the images, which conveyed underlying feelings with striking clarity. This approach enabled the AI to create visual symbols for each emotion, making its art resonate more deeply with human viewers.
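Distilling a palette from a group of images amounts to clustering their pixels by colour. The sketch below is a simplified stand-in for however the project actually derived its palettes: a tiny k-means loop over an `(N, 3)` array of RGB pixels that returns the k dominant colours.

```python
import numpy as np

def dominant_colors(pixels, k=5, iters=20):
    """Distill a k-colour palette from an (N, 3) array of RGB pixels
    with a minimal k-means loop (numpy only)."""
    pixels = np.asarray(pixels, dtype=float)
    # Start from k pixels spread evenly through the array.
    centers = pixels[np.linspace(0, len(pixels) - 1, k).astype(int)].copy()
    for _ in range(iters):
        # Assign every pixel to its nearest palette colour...
        dists = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # ...then move each palette colour to the mean of its pixels.
        for j in range(k):
            members = pixels[labels == j]
            if len(members):
                centers[j] = members.mean(axis=0)
    return centers.round().astype(int)
```

Pooling the pixels of every image tagged with one emotion before clustering yields a single palette per emotion.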
Experimenting with AI and Flowers
One of the initial tests involved combining hand-drawn flowers with real flower data from Oxford University. Scerri drew over 1,000 different flowers, each unique but in a similar style. This dataset was then combined with real flower data and fed into the GANs. The result was an alien-like flower that blended drawn and real elements in fascinating ways. This experiment showcased the potential of AI to merge human creativity with real-world data, producing unique and compelling artwork.
The Evolution of AI Art Tools
As the project progressed, new AI art tools like DALL-E and Stable Diffusion emerged. DALL-E, developed by OpenAI, could create images from text prompts, having been trained on vast amounts of image data from different artists across the internet. Seeing this, the team decided to pause and observe how quickly this technology was evolving. The rapid development of DALL-E and similar tools was astounding, and they wanted to fully understand its potential before moving forward.
Stable Diffusion
Stable Diffusion represented another exciting development in AI art. Unlike GANs, which create images through a back-and-forth process between two neural networks, Stable Diffusion uses a different approach. It starts with a noisy image and gradually refines it into a clear and detailed picture. This method allows for more control and precision in creating artwork. With Stable Diffusion, the team could generate high-quality images with intricate details that were previously difficult to achieve. This opened up new possibilities for blending different styles and experimenting with more complex visual concepts.
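The noising half of that process has a convenient closed form, x_t = sqrt(alpha_bar_t) * x_0 + sqrt(1 - alpha_bar_t) * noise, which lets us jump straight to any noise level. The toy sketch below uses a generic linear schedule for illustration only, not Stable Diffusion's actual configuration.

```python
import numpy as np

# A linear "beta" schedule: how much noise is added at each of 1000 steps.
betas = np.linspace(1e-4, 0.02, 1000)
alpha_bars = np.cumprod(1.0 - betas)  # cumulative signal retained at step t

def noisy_sample(x0, t, rng):
    """Jump straight to step t of the forward (noising) process:
    x_t = sqrt(alpha_bar_t) * x0 + sqrt(1 - alpha_bar_t) * noise."""
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bars[t]) * x0 + np.sqrt(1.0 - alpha_bars[t]) * eps
```

Generation runs this process in reverse: starting from pure noise (large t), a trained network repeatedly predicts and removes the noise until a clear image remains.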
Project Description
Tomorrow’s Blossoms strives to bridge the gap between human emotional experience and machine learning, exploring the depths of what drives human creativity beyond technical skill. The collaboration between the artist and the AI becomes a dynamic exchange where the machine not only learns from human input but also exhibits an emergent form of creative behavior reflecting the emotions and conceptual ideas that fuel human art.
Humans create art as a means of expressing their internal world—emotions, thoughts, memories, and experiences—in a tangible form. The act of creating art is deeply intertwined with the artist’s emotional state and can serve as both an outlet for and a reflection of their innermost feelings. The emotional swings artists experience during the creation process are significant and varied, encompassing a wide spectrum of feelings such as inspiration, frustration, joy, sadness, and a sense of accomplishment.
Phases of the Creative Process
Inspiration and Motivation
The initial spark of inspiration can be driven by a profound emotion, an intriguing idea, or a compelling narrative. This stage often comes with excitement and eagerness to bring the vision to life.
Frustration and Struggle
As artists work through their creative process, they may encounter obstacles and challenges, leading to feelings of frustration or self-doubt. The struggle to perfect a technique or convey a concept accurately can be emotionally taxing.
Flow and Immersion
When artists enter a state of flow, they become fully immersed in their work, often losing track of time and external surroundings. This state is typically accompanied by feelings of satisfaction and a deep sense of connection to the work.
Joy and Fulfillment
Completing a piece of art can bring immense joy and a sense of fulfillment. Seeing the final product that began as a mere idea can be incredibly rewarding.
Vulnerability and Exposure
Sharing art with an audience exposes the artist’s inner world to public scrutiny. This can evoke vulnerability, anxiety, and apprehension but also pride and validation if the work is well received.
Reflection and Critique
Post-creation, artists often reflect on their work, analyzing their techniques and emotional responses throughout the process. This reflection can lead to further growth and development in their artistic journey.
Tomorrow’s Blossoms encapsulates these complex emotional swings and translates them into a form that an AI can understand and replicate. By collecting vast amounts of data from public photo-sharing websites and scientific datasets, the project aims to feed a machine-learning algorithm with images that represent essential human life concepts. This AI is then trained to mimic the emotional experiences of humans in the creation of art.
Dataset Creation and Image Processing
To generate a substantial dataset for the AI, Scerri had to devise an efficient method to produce 400 images per emotion. After deliberating on various approaches, she switched from white paper to colored paper whose hues matched the color palette associated with each emotion. This simple adjustment significantly enhanced the image quality and allowed for the creation of a more cohesive dataset.
Scerri used spray paint, acrylics, and crayons due to their smooth application, which facilitated scanning. This choice ensured consistency across the images and streamlined the overall process. The images were first processed using Dalli’s Universal Machine Artist (UMA), initially with a video GAN. These were later blended with other datasets, including images of thousands of flowers, hedges, natural scenes, and city scenes, provided by Microsoft, the University of Oxford, and Nvidia.
Mastering Stable Diffusion
Mastering Stable Diffusion marked another significant milestone in the project. Early collaboration with Stability AI provided access to Stable Diffusion models from their inception. The addition of Stable Diffusion to UMA substantially improved the range of tools available, enabling results that exceeded initial expectations. Compared with many other AI art applications, the outcomes Stable Diffusion produced aligned more closely with the project's vision.
Digitization and Procreate Enhancements
To enhance the images further, Scerri transferred each analog image to Procreate and introduced digital techniques to each one. This transformation yielded stunning images that, when paired with custom prompts, took on a newfound depth and power. The iterative process of applying AI techniques and then enhancing the results digitally demonstrated the value of continuous improvement and refinement in AI projects.
Creating Videos
Using the amassed data, the team dramatized it by following Kurt Vonnegut's emotional arc for the story of Cinderella, resulting in a mesmerizing 10-minute video. Video Stable Diffusion models were used to create moving images that captured the project's essence. This integration of AI with creative storytelling techniques produced compelling and engaging content, showcasing the versatility of AI in art and entertainment.
Adapting for Planetarium Display
Initially planned for a conventional art gallery, the exhibition found a new home in Esplora’s planetarium dome in Malta. Adapting the video content for the dome-shaped projection involved several technical challenges. The video had to be reformatted using specialized software to wrap the visuals around the curved dome. Additionally, the planetarium’s calibration for dark night colors required comprehensive color and contrast adjustments to ensure visibility and aesthetic alignment. Some segments needed to be eliminated due to their incompatibility with the dome’s display capabilities.
The immersive environment of the planetarium also necessitated reworking zoom levels and camera movements to avoid disorientation or motion sickness in viewers. Slowing down the video playback and smoothing out rapid camera movements ensured a comfortable viewing experience. These technical modifications allowed the team to leverage the planetarium’s advanced projection technology to create a unique and engaging exhibition of Tomorrow’s Blossoms.
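One way to picture the reformatting step is the "dome master" layout that planetarium projectors commonly expect: a square frame whose circular area is addressed in polar coordinates. The sketch below builds a per-pixel sampling map from a flat frame into such a layout; it is a deliberately simplified polar mapping written for illustration, not the specialized software the team used, and real fisheye lens models are more involved.

```python
import numpy as np

def domemaster_map(size, src_h, src_w):
    """For each pixel of a size x size dome-master frame, compute which
    pixel of a flat src_h x src_w frame to sample (simplified polar map)."""
    ys, xs = np.mgrid[0:size, 0:size]
    # Normalise to [-1, 1] with the dome's zenith at the centre.
    u = (xs - size / 2) / (size / 2)
    v = (ys - size / 2) / (size / 2)
    r = np.sqrt(u**2 + v**2)       # 0 at the zenith, 1 at the dome's rim
    theta = np.arctan2(v, u)       # azimuth around the dome
    inside = r <= 1.0              # corners outside the circle stay black
    # Azimuth selects the source column, radius the source row.
    src_x = ((theta + np.pi) / (2 * np.pi) * (src_w - 1)).astype(int)
    src_y = (np.clip(r, 0, 1) * (src_h - 1)).astype(int)
    return src_y, src_x, inside
```

Applying such a map per frame wraps the flat video around the dome; the colour, contrast, and motion adjustments described above are then layered on top.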
Acknowledgements
Tomorrow’s Blossoms was funded by the Arts Council Malta Project Support Scheme, grant reference number PSG71-21-513. We would like to thank the Arts Council Malta, the CSAI Foundation, Esplora and Xjenza Malta for their continuous support. We would also like to thank our collaborators in this project: UMNAI, 111 Art, Stability AI, Runway AI, University of Oxford, Microsoft, Nvidia and Open AI. Music provided by Moby and mixed by JJoy. Finally, we would like to thank Jesper Scerri, whose contributions to software engineering, infrastructure, and visual editing made this project possible.