AUDIOVISUAL
I developed an end-to-end workflow combining multiple AI models with Adobe's creative suite to produce extended, cohesive audiovisual art that transcends the fragmented nature of current generative tools. This process demonstrates AI as a powerful force multiplier for creative expression.
“Sonic Hypercube” demonstrates a pure generative workflow: taking images from DALL-E, animating them with Firefly, and stitching them together with a soundtrack in Premiere to create a singular audiovisual experience.
“Living Ink” extends the workflow to animate original vector art from Illustrator with both Firefly and Pika before assembling the results in Premiere.
“HarmoNex Radio” focuses on generating and animating hyper-detailed visuals in Firefly, upscaling stills with Leonardo AI, and deliberately syncing the visuals to audio in Premiere. See nxs.design for more on this artistic theme and exploration.
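The stitching step common to all three pieces can also be prototyped outside Premiere. Here is a minimal sketch using ffmpeg's concat demuxer, assuming ffmpeg is on PATH, the clips are codec-compatible MP4s, and all file names are placeholders:

```python
# Sketch only: stitch generated clips and lay a soundtrack under them.
# Assumes ffmpeg on PATH, codec-compatible clips, placeholder file names.
import subprocess
from pathlib import Path

clips = sorted(Path("clips").glob("*.mp4"))  # clip_001.mp4, clip_002.mp4, ...

# The concat demuxer reads a manifest listing each input file in order.
manifest = Path("concat.txt")
manifest.write_text("".join(f"file '{c.resolve()}'\n" for c in clips))

subprocess.run([
    "ffmpeg", "-y",
    "-f", "concat", "-safe", "0", "-i", str(manifest),  # stitched video stream
    "-i", "soundtrack.mp3",                             # placeholder soundtrack
    "-map", "0:v", "-map", "1:a",                       # video from clips, audio from track
    "-c:v", "copy", "-shortest",                        # no re-encode; end with shorter stream
    "output.mp4",
], check=True)
```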
Strategic Multi-Platform Generation
Initial Assets: Created a consistent visual foundation using Adobe Firefly, supplemented by DALL-E, Pika, and Leonardo AI for specific aesthetic qualities
Prompt Engineering: Developed "prompt families" with standardized elements to maintain visual coherence across generations (sketched after this list)
Cross-Tool Curation: Strategic selection based on each platform's strengths for specific visual elements
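One way to implement a "prompt family" is a template that holds standardized style anchors fixed while only the subject evolves shot by shot. A minimal sketch; the style terms and shot descriptions are illustrative, not the actual prompts used:

```python
# Sketch only: a "prompt family" keeps the standardized elements fixed so each
# generation stays coherent while the subject evolves shot by shot.
FAMILY = {
    "style":   "neon wireframe, volumetric haze, 35mm film grain",
    "palette": "deep indigo and electric cyan",
    "camera":  "slow dolly forward, centered composition",
}

def build_prompt(subject: str, family: dict = FAMILY) -> str:
    """Combine an evolving subject with the family's fixed anchors."""
    return ", ".join([subject, family["style"], family["palette"], family["camera"]])

# Progressive evolution: only the subject changes between generations.
shots = [
    "a rotating hypercube suspended in fog",
    "the hypercube unfolding into a tunnel of panels",
    "the tunnel collapsing into a single glowing point",
]
for shot in shots:
    print(build_prompt(shot))
```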
Sequential Frame Matching
Created technique for generating sequential 5-second clips with matching start/end frames (see the frame-extraction sketch after this list)
Beginning frame of each new clip visually matches ending frame of previous clip
Precise prompt engineering to maintain scene elements while allowing progressive evolution
Documented patterns for reliable frame matching across scene transitions
Created library of transition prompts that bridge different visual environments
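The matching start frame comes from the previous clip's final frame. A minimal sketch of that extraction step, assuming OpenCV (opencv-python) and placeholder file names:

```python
# Sketch only: save a clip's final frame so it can seed the next generation
# as a matching start frame. Assumes opencv-python; file names are placeholders.
import cv2

def last_frame(video_path: str, out_path: str) -> None:
    cap = cv2.VideoCapture(video_path)
    count = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))  # may be approximate for some codecs
    cap.set(cv2.CAP_PROP_POS_FRAMES, count - 1)     # seek to the final frame
    ok, frame = cap.read()
    cap.release()
    if not ok:
        raise RuntimeError(f"could not read final frame of {video_path}")
    cv2.imwrite(out_path, frame)

# The saved frame becomes the image-to-video input for the next 5-second clip.
last_frame("clip_012.mp4", "clip_013_seed.png")
```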
Photoshop Enhancement Pipeline
Neural filter color grading across generations for visual continuity (a scriptable analogue is sketched after this list)
Generative fill to expand images and connect scenes
Creation of composite transition bridges between stylistically different sequences
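Photoshop's neural filters aren't driven from code in this workflow, but the continuity idea can be sketched with histogram matching in scikit-image. This is a stand-in analogue, not the actual Neural Filter pipeline, and the file names are placeholders:

```python
# Sketch only: a scriptable analogue of cross-clip color grading. Each frame's
# color distribution is matched to one reference still so consecutive
# generations share a palette. Requires scikit-image; file names are placeholders.
import numpy as np
from skimage import io
from skimage.exposure import match_histograms

reference = io.imread("reference_grade.png")   # the "hero" frame that sets the look
frame = io.imread("clip_007_frame.png")        # frame from a mismatched clip

graded = match_histograms(frame, reference, channel_axis=-1)
io.imsave("clip_007_frame_graded.png", np.clip(graded, 0, 255).astype(np.uint8))
```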
Premiere Pro Advanced Assembly
Seamless Transitions:
Frame-matched clips allow for nearly invisible cuts between generations (see the boundary-check sketch after this list)
Morph Cuts for more dynamic visual transitions
Subtle speed ramping at clip boundaries for natural motion flow and sonic alignment
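Whether a boundary can survive a hard cut can be estimated by comparing the two frames on either side of it (extracted as in the frame-matching sketch above). A rough sketch; the threshold is illustrative, not calibrated:

```python
# Sketch only: score how invisible a cut will be by comparing the outgoing
# clip's last frame with the incoming clip's first frame. Placeholder file names.
import cv2
import numpy as np

end_a = cv2.imread("clip_012_last.png").astype(np.float32)
start_b = cv2.imread("clip_013_first.png").astype(np.float32)

diff = float(np.mean(np.abs(end_a - start_b)))  # mean per-pixel difference, 0-255 scale
verdict = "hard cut should read as invisible" if diff < 8 else "consider a Morph Cut or speed ramp"
print(f"boundary difference {diff:.1f}: {verdict}")
```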
Music Integration:
Selected clips with rhythmic potential for synchronization
Mapped key visual transitions to musical beats and phrases
Used audio waveforms to time clip transitions (see the beat-tracking sketch below)
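In Premiere this timing was read off the waveform by eye; an equivalent mapping can be sketched programmatically with librosa's beat tracker. The soundtrack file and cut times below are placeholders:

```python
# Sketch only: derive beat times to snap clip transitions onto. librosa is an
# assumed stand-in for reading waveforms in Premiere; file and times are placeholders.
import librosa

y, sr = librosa.load("soundtrack.mp3")
tempo, beat_frames = librosa.beat.beat_track(y=y, sr=sr)
beat_times = librosa.frames_to_time(beat_frames, sr=sr)

# Snap each planned cut (one per 5-second clip) to the nearest detected beat.
planned_cuts = [5.0, 10.0, 15.0]
snapped = [min(beat_times, key=lambda b: abs(b - t)) for t in planned_cuts]
print(f"tempo ~ {float(tempo):.1f} BPM; snapped cuts: {[round(t, 2) for t in snapped]}")
```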
Results
Production of extended, cohesive videos from dozens of 5-second generative clips
Seamless visual flow that conceals the fragmented nature of individual generations
Stage-quality output with a fully synchronized audiovisual experience
Replicable production methodology for future projects