Using TPUs in Google Colab? #365
Is anyone using TPUs in Google Colab? If so, how did you set it up? I'm assuming additional libraries have to be installed, since simply selecting the TPU runtime doesn't work. It would significantly speed up training and inference. Thanks
I used Kaggle the last time I used a TPU. Google how to set up a TPU in Kaggle. Then you'll have to modify this repo's code to use PyTorch's XLA device (torch_xla) so it runs on the TPU. A better choice is the PyTorch Lightning library, which can target TPUs automatically.
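For reference, a minimal sketch of what the torch_xla change looks like, assuming torch_xla is installed (it is preinstalled on Kaggle/Colab TPU runtimes). The model here is a toy stand-in; in practice you'd thread `xm.xla_device()` through everywhere the repo currently does `.cuda()` or `torch.device("cuda")`:

```python
# Minimal sketch, not the repo's actual training loop.
import torch
import torch.nn as nn
import torch_xla.core.xla_model as xm

device = xm.xla_device()              # resolves to the TPU core
model = nn.Linear(10, 2).to(device)   # hypothetical stand-in for the real model
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

x = torch.randn(8, 10, device=device)
y = torch.randn(8, 2, device=device)

loss = nn.functional.mse_loss(model(x), y)
loss.backward()
xm.optimizer_step(optimizer)          # steps the optimizer and syncs the XLA graph
```

With PyTorch Lightning you'd skip the manual device handling and just pass `Trainer(accelerator="tpu", devices=8)`, but that means porting the training loop to a `LightningModule` first.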
Btw, the TPU memory footprint is notorious: I trained EfficientNet on a P100 (16 GB) successfully, but on a TPU I had to significantly reduce the batch size.