Week 4.1: Automated Content Creation
Required Software & Accounts
In addition to the previous tools (VS Code, Python, Claude Code), this week requires accounts for media generation.
- Fal.ai: Used for AI image generation (Flux/Nano Banana models).
- ElevenLabs: Used for AI text-to-speech generation.
- FFmpeg: Required for video assembly (command line video editing).
- Windows: `choco install ffmpeg` (requires Chocolatey)
- Mac: `brew install ffmpeg` (requires Homebrew)
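Before moving on, it is worth confirming the install worked. A quick Python check (not one of the class prompts) can confirm FFmpeg is reachable on your PATH:

```python
import shutil
import subprocess

def check_ffmpeg() -> bool:
    """Return True if the ffmpeg binary is on PATH and runs."""
    path = shutil.which("ffmpeg")
    if path is None:
        return False
    # Ask ffmpeg for its version string to confirm it actually executes.
    result = subprocess.run([path, "-version"], capture_output=True, text=True)
    return result.returncode == 0

print("ffmpeg found:", check_ffmpeg())
```

If this prints `False`, revisit the install commands above before attempting the video assembly step.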
Class Prompts
You can copy and paste these prompts directly into the Claude Code input bar to replicate the content creation workflow.
1. Project Structure & Research (Sub-Agents)
Create the project folders:
create projects/puppies

Research using a Sub-Agent:
use a sub agent to do research on reddit to find surprising and interesting facts about puppies
gather 10-20 facts
save to projects/puppies/data/research.md

Curate the best facts:
based on the research doc, choose the 5 most interesting facts and save those to chosen-topics.md

2. Scripting & Refining
Draft the script:
based on our chosen topics write a short form video script
1 sentence per line
avoid AI cliches like "and here's where it gets interesting"
save it to script.md

Critique and fix the script (Sub-Agent):
using another sub agent, research best practices for short form video content on youtube. based on that research, evaluate your script. make edits if necessary

3. Visual Planning (Segments)
Create the image generation plan:
now create segments.yaml
a list of segment objects, where each segment will show one image and play 1-2 lines (depending on the line length)
the objects should include:
- a list of the lines
- an id (simple integer)
- an image prompt to generate an image for those lines

Fixing short segments (Logic Check):

the last segment only has one very short sentence. either add another line to that segment or make the CTA longer. also update script.md

4. Image Generation & Editing
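Before generating images, it can help to know what the segments.yaml produced above might look like. A sketch of one entry, with field names following the prompt (the fact text and image prompt are purely illustrative):

```yaml
- id: 1
  lines:
    - "Puppies are born deaf and blind."
    - "Their eyes open after about two weeks."
  image_prompt: "A newborn puppy sleeping in a cozy blanket, soft warm lighting"
```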
Generate the images:
READ docs/tools.md
READ projects/puppies/data/segments.yaml
for each segment create projects/puppies/media/[id].png

Edit specific images (The “Green Puppy” fix):
edit 1.png save to 1.2.png with a "make him greener" prompt

Update the data file with paths:
UPDATE projects/puppies/data/segments.yaml
for each segment add the relative path to the image
use "image" for the key
- 1.2.png for the first one
- make the paths relative to the project root

5. Audio Tool Creation
Research the API (Date Injection Trick):
WEB RESEARCH eleven labs api
- make sure you study the latest version
- it is currently January 2026

Create the Python Tool:
CREATE tools/elevenlabs_tools.py
DEF create_audio_from_text(text, output_path)
VERIFY that the function works by running it
- the api key is in .env under ELEVENLABS_API_KEY

Update Documentation:
the documentation actually should have been added to docs/tools.md. move it.

6. Audio Generation & Assembly
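For reference, the tool created in the previous section might look roughly like this. It is a minimal sketch only: the endpoint path, model ID, and placeholder voice ID are assumptions to verify against the current ElevenLabs API docs, exactly as the research prompt instructs.

```python
import json
import os
import urllib.request

# Placeholder: substitute a real voice ID from your ElevenLabs account.
VOICE_ID = "your-voice-id"
API_URL = "https://api.elevenlabs.io/v1/text-to-speech/"

def create_audio_from_text(text: str, output_path: str) -> str:
    """Generate speech for `text` and write the audio bytes to `output_path`."""
    # ELEVENLABS_API_KEY is expected in the environment (loaded from .env).
    payload = json.dumps({"text": text, "model_id": "eleven_multilingual_v2"})
    req = urllib.request.Request(
        API_URL + VOICE_ID,
        data=payload.encode("utf-8"),
        headers={
            "xi-api-key": os.environ["ELEVENLABS_API_KEY"],
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req, timeout=60) as resp:
        audio = resp.read()
    with open(output_path, "wb") as f:
        f.write(audio)
    return output_path
```

In class, Claude Code writes and verifies this file itself; the sketch is just to show the shape of the result.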
Map lines to audio files:
create projects/puppies/data/lines.yaml
- a list of objects that include the line text
- path to the audio relative to project root

Generate the audio files:
READ docs/tools.md
READ projects/puppies/data/script.md
for each line in the script use your eleven labs tools to create audio for that script
save them to projects/puppies/media/[line number][first few words].mp3

Assemble the Video Prototype (FFmpeg):
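Before assembling, note the filename scheme from the previous step. One possible interpretation of [line number][first few words].mp3 as a small helper (the exact slug format is an assumption):

```python
import re

def audio_filename(line_number: int, line: str, word_count: int = 3) -> str:
    """Build an mp3 filename from a line number and the first few words."""
    # Keep only word characters so the name is filesystem-safe.
    words = re.findall(r"[A-Za-z0-9']+", line)[:word_count]
    slug = "-".join(w.lower().strip("'") for w in words)
    return f"{line_number:02d}-{slug}.mp3"

print(audio_filename(1, "Puppies are born deaf and blind."))
# 01-puppies-are-born.mp3
```

Predictable names like this make the lines.yaml mapping trivial to generate.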
READ projects/puppies/data/segments.yaml
READ projects/puppies/data/lines.yaml
use ffmpeg to combine them all into a video
- combine all the line audios in order
- display the image for each segment for the combined duration of line audios in that segment
save result to output/puppies/puppies.mp4
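To see what the assembly step amounts to under the hood: a common FFmpeg pattern is to render each segment as a still image looped over its concatenated line audio, then join the segment clips. A simplified sketch of building the per-segment command (paths and layout are assumptions from the prompts above, not the exact command Claude Code will produce):

```python
def build_segment_cmd(image_path: str, audio_paths: list[str], out_path: str) -> list[str]:
    """FFmpeg args for one segment: a looped still image over concatenated line audio."""
    cmd = ["ffmpeg", "-y", "-loop", "1", "-i", image_path]
    for audio in audio_paths:
        cmd += ["-i", audio]
    n = len(audio_paths)
    # Concatenate the audio inputs (indices 1..n); input 0 is the image.
    filter_expr = "".join(f"[{i}:a]" for i in range(1, n + 1)) + f"concat=n={n}:v=0:a=1[a]"
    cmd += [
        "-filter_complex", filter_expr,
        "-map", "0:v", "-map", "[a]",
        "-shortest",            # stop the looped image when the audio ends
        "-pix_fmt", "yuv420p",  # broadly compatible pixel format
        out_path,
    ]
    return cmd

cmd = build_segment_cmd("media/1.2.png", ["media/01-a.mp3", "media/02-b.mp3"], "seg1.mp4")
```

The resulting segment clips can then be stitched together with FFmpeg's concat demuxer into the final puppies.mp4.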