Avatar V, Seedance 2.0 with your Digital Twin, Instant Highlights v2, and video native to your stack.
Hi there,

We shipped 15 things in April. Look at them together, and the story writes itself: video is no longer somewhere you go. It's a capability baked into the tools you already use. Here's the short version of what's new and what you can try this week.
Avatar V
Record one 15-second clip and generate full upper-body video of yourself across different outfits and settings, with the same on-screen presence. Our most advanced avatar model yet, built to retire reshoots.
Seedance 2.0 with your Digital Twin
HeyGen is the first avatar platform to integrate Seedance 2.0 with verified human faces. The same Digital Twin you built for Avatar V gets cinematic motion, dynamic camera work, and up to three avatars in a single scene.
Instant Highlights v2
Drop in a podcast, webinar, keynote, or interview. Type "the part where she talks about fundraising," and the AI finds it. Face tracking, multi-speaker support, captions, and translation into 175+ languages and dialects with lip sync in 4K. Same-day clips in your audience's language.
Video where you already work
New integrations are live with Gamma (decks to avatar-narrated video) and Granola (meeting notes to recap video). Video stops being a separate project and becomes a natural step in the work you're already doing.
For builders, video is now code |
We open-sourced HyperFrames under Apache 2.0, a programmable video engine developers can fork, extend, and ship with. Compose scenes in HTML, drive them with JavaScript, render production-quality video from the CLI. Your agent can ship video with a single prompt. |
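HyperFrames' actual scene schema isn't spelled out here, so the snippet below is only a minimal sketch of the idea: an HTML scene whose animation is driven by plain JavaScript through the standard Web Animations API. The markup, element ids, and timings are illustrative assumptions, not HyperFrames' real format.

```html
<!-- Hypothetical scene: a title card that fades in over one second.      -->
<!-- Structure and naming are illustrative, not HyperFrames' real schema. -->
<div id="title-card" style="font: bold 64px sans-serif; opacity: 0;">
  April Release Notes
</div>
<script>
  // Standard Web Animations API: fade the card from 0 to full opacity
  // over 1000 ms and hold the final state.
  document.getElementById("title-card").animate(
    [{ opacity: 0 }, { opacity: 1 }],
    { duration: 1000, fill: "forwards" }
  );
</script>
```

A renderer invoked from the CLI would then rasterize a page like this frame by frame into production video, which is the workflow described above.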
Thanks for building with us,
The HeyGen Team
|
|
Create videos like you write a doc. |
12130 Millennium Drive, Suite 300, Los Angeles, CA 90094 |
|
|
|