All About AI - A Micro-Learning Experiment

This is an experiment in producing short micro-learning videos using the research and synthesis capabilities of AI, along with an agentic production process. Please let us know what you think by giving feedback on any individual video or on the experiment as a whole.

Use the drop-down selector to choose a topic. If you start playing a video, stop it before choosing another; otherwise the original will keep playing in the background until you refresh the page.
How These Videos Were Created
Each video was created with a three-stage pipeline that chained together three AI models, each chosen for its particular strengths:
Stage 1: Initial Content Creation (GPT-4o)
The first AI model, OpenAI's GPT-4o, generated the initial training script for each topic: natural, conversational narration for each slide, written at a chosen research depth (Basic, Standard, or Comprehensive) that controls the length and level of detail.
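A minimal sketch of this stage, assuming the official OpenAI Python SDK; the helper name, prompt wording, and depth parameter are illustrative, not the exact prompts used:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def draft_script(topic: str, depth: str = "Standard") -> str:
    """Draft slide-by-slide narration for a topic (hypothetical helper)."""
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system",
             "content": "You write natural, conversational slide-by-slide "
                        "narration for micro-learning videos."},
            {"role": "user",
             "content": f"Topic: {topic}\n"
                        f"Research depth: {depth} (controls length and detail)."},
        ],
    )
    return response.choices[0].message.content
```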
Stage 2: Fact-Checking & Enhancement (Gemini 2.0 Flash)
The second AI model, Google's Gemini 2.0 Flash, reviewed each script for accuracy. It verified factual claims, flagged outdated information, and added critical missing context while staying within the original word-count constraints. In effect, it played the role of a fact-checker reviewing the content.
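A comparable sketch of the fact-checking pass, assuming the google-generativeai Python package; the prompt and helper name are again illustrative:

```python
import os
import google.generativeai as genai

genai.configure(api_key=os.environ["GOOGLE_API_KEY"])
model = genai.GenerativeModel("gemini-2.0-flash")

def fact_check(script: str) -> str:
    """Ask Gemini to verify claims and correct the script (hypothetical helper)."""
    prompt = (
        "Review this narration script. Verify factual claims, flag outdated "
        "information, and add any critical missing context, keeping the word "
        "count roughly unchanged:\n\n" + script
    )
    return model.generate_content(prompt).text
```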
Stage 3: Language Polishing (Claude 3.5 Sonnet)
The third AI model, Anthropic's Claude 3.5 Sonnet, refined the language for clarity and engagement. It improved sentence flow, ensured a consistent tone, and made complex concepts more accessible, all while keeping the facts intact and preserving brevity.
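A sketch of the polishing stage, assuming the Anthropic Python SDK; the prompt is illustrative:

```python
import anthropic

claude = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

def polish(script: str) -> str:
    """Have Claude refine flow and tone without altering facts (hypothetical helper)."""
    message = claude.messages.create(
        model="claude-3-5-sonnet-20241022",
        max_tokens=2048,
        messages=[{
            "role": "user",
            "content": "Polish this script for clarity, flow, and consistent "
                       "tone. Keep every fact intact and do not add length:\n\n"
                       + script,
        }],
    )
    return message.content[0].text
```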
Why Three Models?
Each AI has different strengths:
- GPT-4o excels at creative content generation
- Gemini is strong at factual verification and research
- Claude is exceptional at language refinement and clarity
By combining all three, the system produced scripts that should be accurate, engaging, and professionally written.
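Chained together, the pipeline is simply function composition over the hypothetical helpers sketched above:

```python
def build_script(topic: str, depth: str = "Standard") -> str:
    draft = draft_script(topic, depth)   # Stage 1: GPT-4o drafts the narration
    checked = fact_check(draft)          # Stage 2: Gemini verifies the facts
    return polish(checked)               # Stage 3: Claude polishes the language
```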
What Happened Next?
After the scripts were finalized:
- Gamma AI transformed the scripts into presentation slides
- OpenAI's text-to-speech converted the narration into natural-sounding audio
- We combined the slides and audio into the final videos (a sketch of these last two steps follows)
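A minimal sketch of the narration and assembly steps, assuming the OpenAI Python SDK for text-to-speech and a local ffmpeg install for muxing; the voice choice, file names, and helper names are illustrative, and the Gamma slide generation itself is not shown:

```python
import subprocess
from openai import OpenAI

client = OpenAI()

def narrate(text: str, out_path: str = "narration.mp3") -> str:
    """Synthesize narration audio with OpenAI text-to-speech."""
    speech = client.audio.speech.create(model="tts-1", voice="alloy", input=text)
    speech.write_to_file(out_path)
    return out_path

def mux(slide_png: str, audio_mp3: str, out_mp4: str) -> None:
    """Pair a still slide image with its narration track using ffmpeg."""
    subprocess.run(
        ["ffmpeg", "-y", "-loop", "1", "-i", slide_png, "-i", audio_mp3,
         "-c:v", "libx264", "-tune", "stillimage",
         "-c:a", "aac", "-pix_fmt", "yuv420p",
         "-shortest", out_mp4],
        check=True,
    )
```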