Is there any way to speed up Mac systems to be faster #1079
Replies: 10 comments 27 replies
-
Did you select MPS as the processor for your Mac conversion?
-
Honestly, until we find time (or someone creates a pull request 👀) to add a TTS engine that has been optimized for MPS, the performance isn't going to be fantastic. Being able to run on MPS and actually being optimized for MPS are two different things. If you want faster processing times, I would suggest using one of our smaller TTS engines (other than XTTS) in the meantime. I made a wiki page comparing TTS speeds for different engines on an M1 Mac: https://github.com/DrewThomasson/ebook2audiobook/wiki/M1-Mac-CPU-speeds
-
When I start an ebook conversion, this warning message comes back (Mac M3 Pro). Does it mean anything?
-
@blu3knight you could try the coming update to see if MPS is better there. I added a patch that can use full MPS.
-
OK, waiting for the update, but in the meantime I noticed something strange when I ran some tests from the command line. Checking for MPS in the Python environment, I get that MPS is available, but when I run the following command I get the following in the logs, which is strange considering that it should be using MPS.
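For reference, the usual way to check MPS from a Python prompt is the standard PyTorch API (this is generic PyTorch, not an E2A-specific command):

```python
import torch

# is_built(): this torch wheel was compiled with MPS support.
# is_available(): the Metal GPU can actually be used at runtime.
print(torch.backends.mps.is_built())
print(torch.backends.mps.is_available())
```

If both print True but the conversion logs still report CPU, the conversion is likely running under a different Python interpreter than the one you tested with.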
-
Please provide the full log... something is wrong with torch on your Mac: torch apparently does not recognize your Mac's GPU.
-
The question is: are you using the OS Python or the E2A virtual Python environment when you run this script?
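A quick way to tell which interpreter is actually running (generic Python, nothing E2A-specific):

```python
import sys

# A venv interpreter lives inside the venv directory; the system one
# lives under the OS prefix (e.g. /usr/bin or a Homebrew path).
print(sys.executable)

# True inside a virtual environment, False under the OS Python.
print(sys.prefix != sys.base_prefix)
```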
-
By the way, the E2A XTTS model already uses MPS... "Using MPS - VRAM capacity could not be detected."
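For context, torch code on Apple silicon typically ends up on MPS through a fallback pattern like this (a generic sketch, not necessarily E2A's exact code):

```python
import torch

# Generic fallback order: CUDA first, then Apple's MPS backend, then CPU.
if torch.cuda.is_available():
    device = "cuda"
elif torch.backends.mps.is_available():
    device = "mps"
else:
    device = "cpu"
print(device)
```

Apple silicon uses unified memory, so there is no dedicated VRAM pool to query the way CUDA exposes one, which likely explains the "VRAM capacity could not be detected" message.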
-
I have an M4 Max with 128 GB of RAM. I see no difference in performance between CPU and MPS when MPS is selected; it takes about 3 min 30 s per 1% either way. Also, really cool project!
-
Try the latest git version or the latest release.
-
I have two systems here that I have run tests on:
System 1 is a Mac M3 Pro with 38 GB of RAM.
System 2 is an Intel system with an NVIDIA GeForce RTX 3070 (8 GB of GPU RAM) and 64 GB of regular RAM.
For regular inference and every other AI workload, the Mac runs circles around the Nvidia graphics card. But for XTTS the Mac is extremely slow.
By my calculations, the time needed to complete 1% of the workload is:
Mac = 16 min 19 s per 1%
Nvidia = 2 min 35 s per 1%
At that rate, the Mac would reach 100% in about 27 hours and 11 minutes.
The Nvidia system would reach the same 100% in about 4 hours and 19 minutes.
In other words, the Nvidia system takes about 84% less time than the Mac (roughly a 6.3× speedup).
Are there any improvements that can be made, or is this just the way it is and the performance will stay this slow?
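Those totals can be verified with a few lines of Python (numbers taken from the measurements above):

```python
# Per-1% times measured above, in seconds.
mac_per_pct = 16 * 60 + 19      # 979 s
nvidia_per_pct = 2 * 60 + 35    # 155 s

mac_total_h = mac_per_pct * 100 / 3600        # ~27.2 hours to 100%
nvidia_total_h = nvidia_per_pct * 100 / 3600  # ~4.3 hours to 100%

speedup = mac_per_pct / nvidia_per_pct         # ~6.3x
time_saved = 1 - nvidia_per_pct / mac_per_pct  # ~84% less time
print(f"Mac: {mac_total_h:.1f} h, Nvidia: {nvidia_total_h:.1f} h, "
      f"speedup {speedup:.1f}x, {time_saved:.0%} less time")
```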