diff --git "a/.gitbook/assets/Screenshot 2025-01-04 at 12.52.36\342\200\257AM (1).png" "b/.gitbook/assets/Screenshot 2025-01-04 at 12.52.36\342\200\257AM (1).png" new file mode 100644 index 0000000..5f44b33 Binary files /dev/null and "b/.gitbook/assets/Screenshot 2025-01-04 at 12.52.36\342\200\257AM (1).png" differ diff --git "a/.gitbook/assets/Screenshot 2025-01-04 at 12.52.36\342\200\257AM.png" "b/.gitbook/assets/Screenshot 2025-01-04 at 12.52.36\342\200\257AM.png" new file mode 100644 index 0000000..5f44b33 Binary files /dev/null and "b/.gitbook/assets/Screenshot 2025-01-04 at 12.52.36\342\200\257AM.png" differ diff --git a/SUMMARY.md b/SUMMARY.md index 6981ca7..14cab54 100644 --- a/SUMMARY.md +++ b/SUMMARY.md @@ -28,6 +28,7 @@ * [LipSync videos with Custom Voices](guides/how-to-use-ai-lip-sync-generator/lipsync-videos-with-custom-voices.md) * [Set up your API for Lipsync with Local Folders](guides/how-to-use-ai-lip-sync-generator/set-up-your-api-for-lipsync-with-local-folders.md) * [Tips to create great HD lipsync output](guides/how-to-use-ai-lip-sync-generator/tips-to-create-great-hd-lipsync-output.md) + * [Frequently Asked Questions about Lipsync](guides/how-to-use-ai-lip-sync-generator/frequently-asked-questions-about-lipsync.md) * [🗣️ How to use ASR?](guides/how-to-use-asr/README.md) * [📊 How to create language evaluation for ASR?](guides/how-to-use-asr/how-to-create-language-evaluation-for-asr.md) * [How to use Compare AI Translations?](guides/how-to-use-compare-ai-translations/README.md) diff --git a/guides/copilot/README.md b/guides/copilot/README.md index df00541..10c1673 100644 --- a/guides/copilot/README.md +++ b/guides/copilot/README.md @@ -30,6 +30,10 @@ In practice, AI copilots and chatbots are generally useful at three broad functi Our approach with Gooey.AI is that we learn best when we can see work of others and hence, please check out [https://gooey.ai/copilot/examples](https://gooey.ai/copilot/examples). +

[Figures: Barebones GPT Copilot, Farmer.Chat, Health]

+ + + ## What we offer [Gooey.AI's Copilot](https://gooey.ai/copilot) is the most advanced chatbot maker in the industry. It offers: diff --git a/guides/how-to-use-ai-lip-sync-generator/frequently-asked-questions-about-lipsync.md b/guides/how-to-use-ai-lip-sync-generator/frequently-asked-questions-about-lipsync.md new file mode 100644 index 0000000..ae44b13 --- /dev/null +++ b/guides/how-to-use-ai-lip-sync-generator/frequently-asked-questions-about-lipsync.md @@ -0,0 +1,19 @@ +--- +description: Some answers to common issues when implementing Lipsync in production +--- + +# Frequently Asked Questions about Lipsync + +### Q: Can we process a full video while only applying lip sync to specific segments? We're trying to personalize welcome messages, changing one word {name} to the user's name. For example: "Hey {name}, thanks for signing up for our service." We only need the user's name to be lip-synced. + +A: We don’t offer this yet, but you can make API calls with only the {name} segment and stitch the result onto a fixed video that says "...thanks for signing up for our service." + +Here is a Google Colab sample to set this up with our API: + +{% embed url="https://colab.research.google.com/drive/1qnqVW7H2fiuV3RVMNPtVeaVkBhb7c898?usp=sharing" %} +
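For reference, here is a minimal Python sketch of that stitching approach. The endpoint name (`LipsyncTTS`), the request fields (`input_face`, `text_prompt`), and the response field (`output_video`) are assumptions based on the general Gooey.AI API pattern and may not match the current schema; the Colab notebook above is the authoritative example. It also assumes a local `ffmpeg` install for the concatenation step.

```python
# Sketch only: endpoint path, payload fields, and response shape are assumptions;
# check the Gooey.AI API docs / the Colab above for the exact schema.
import os
import subprocess
import requests

GOOEY_API_KEY = os.environ["GOOEY_API_KEY"]

def lipsync_name_clip(name: str, face_url: str, out_path: str = "name_clip.mp4") -> str:
    """Generate a short lip-synced clip that speaks only 'Hey {name},'."""
    resp = requests.post(
        "https://api.gooey.ai/v2/LipsyncTTS",  # assumed endpoint
        headers={"Authorization": f"bearer {GOOEY_API_KEY}"},
        json={"input_face": face_url, "text_prompt": f"Hey {name},"},
    )
    resp.raise_for_status()
    video_url = resp.json()["output"]["output_video"]  # assumed response field
    with open(out_path, "wb") as f:
        f.write(requests.get(video_url).content)
    return out_path

def stitch(name_clip: str, fixed_clip: str, out_path: str = "welcome.mp4") -> None:
    """Concatenate the personalized clip with the fixed '...thanks for signing up' video."""
    subprocess.run(
        ["ffmpeg", "-y", "-i", name_clip, "-i", fixed_clip,
         "-filter_complex", "[0:v][0:a][1:v][1:a]concat=n=2:v=1:a=1[v][a]",
         "-map", "[v]", "-map", "[a]", out_path],
        check=True,
    )

stitch(lipsync_name_clip("Priya", "https://example.com/presenter.png"), "fixed_welcome.mp4")
```

The `concat` filter expects both clips to share the same resolution and frame rate; re-encode the name clip to match the fixed video if they differ.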
+### Q: I'm getting a message on lipsync projects about "truncated to 250 frames". What does this mean? + +A: The 250-frame cut-off applies to the input video you have added to the Lipsync workflow: if your video is 24 fps the output will be \~10s, at 60 fps it'll be \~4s, and so on. If you use an image as the lipsync input instead of a video, the default is 25 fps, so that's \~10s again. There is no cut-off once you [upgrade](https://gooey.ai/account/billing/) to a paid subscription!
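If you want to check an input video against this limit before submitting it, you can read the frame rate and frame count with `ffprobe` and apply the same arithmetic (250 frames ÷ fps ≈ seconds of output). A rough sketch, assuming `ffprobe` is installed; the helper name is illustrative:

```python
# Rough sketch: estimate how many seconds of a clip survive the free-tier 250-frame cut-off.
import json
import subprocess

FREE_TIER_FRAME_LIMIT = 250

def truncated_duration_seconds(video_path: str) -> float:
    """Return the output duration in seconds after the 250-frame truncation."""
    probe = subprocess.run(
        ["ffprobe", "-v", "error", "-select_streams", "v:0",
         "-show_entries", "stream=r_frame_rate,nb_frames",
         "-of", "json", video_path],
        capture_output=True, text=True, check=True,
    )
    stream = json.loads(probe.stdout)["streams"][0]
    num, den = map(int, stream["r_frame_rate"].split("/"))  # e.g. "30000/1001"
    fps = num / den
    nb_frames = str(stream.get("nb_frames", ""))
    total = int(nb_frames) if nb_frames.isdigit() else FREE_TIER_FRAME_LIMIT
    return min(total, FREE_TIER_FRAME_LIMIT) / fps

# Example: a 24 fps video gives 250 / 24 ≈ 10.4 s of lip-synced output on the free tier.
```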