This app uses Apple's Core ML Stable Diffusion implementation to achieve maximum performance and speed on Apple Silicon Macs while reducing memory requirements. It also runs on Intel-based Macs.
Features
- Extremely fast and memory efficient (~150MB with Neural Engine)
- Runs well on all Apple Silicon Macs by fully utilizing Neural Engine
- Generate images locally and completely offline
- Generate images based on an existing image (commonly known as Image2Image)
- Generated images are saved with prompt info inside EXIF metadata (view in Finder’s Get Info window)
- Convert generated images to high resolution (using RealESRGAN)
- Autosave & restore images
- Use custom Stable Diffusion Core ML models
- No worries about pickled models
- macOS native app using SwiftUI
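As a sketch of how the EXIF-embedded prompt info could be read programmatically (the helper name, and the assumption that the info lives in the EXIF user-comment field, are illustrative rather than the app's documented format):

```swift
import Foundation
import ImageIO

// Hypothetical helper: read the EXIF dictionary of a generated image
// and return the user-comment field, where prompt info is assumed to be stored.
func promptInfo(from url: URL) -> String? {
    guard let source = CGImageSourceCreateWithURL(url as CFURL, nil),
          let props = CGImageSourceCopyPropertiesAtIndex(source, 0, nil)
              as? [CFString: Any],
          let exif = props[kCGImagePropertyExifDictionary] as? [CFString: Any]
    else { return nil }
    return exif[kCGImagePropertyExifUserComment] as? String
}
```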
When using a model for the very first time, it may take up to 2 minutes for the Neural Engine to compile a cached version. Subsequent generations will be much faster.
Compute Unit
- `CPU & Neural Engine` provides a good balance between speed and low memory usage
- `CPU & GPU` may be faster on M1 Max, Ultra and later but will use more memory

Depending on the option chosen, you will need to use the correct model version (see the Models section for details).

Intel Macs use `CPU & GPU`, as they don't have a Neural Engine.
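In Core ML terms, these options correspond to `MLComputeUnits` values on `MLModelConfiguration`. A minimal sketch (how Mochi Diffusion wires this internally is an assumption):

```swift
import CoreML

// Sketch: mapping the app's Compute Unit options onto Core ML's API.
let config = MLModelConfiguration()
config.computeUnits = .cpuAndNeuralEngine // balanced speed and low memory use
// config.computeUnits = .cpuAndGPU       // may be faster on M1 Max/Ultra, uses more memory
// config.computeUnits = .all             // let Core ML choose the units itself
```

Note that `.cpuAndNeuralEngine` requires macOS 13 or later, which the app already requires.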
Models
You will need to convert or download Core ML models in order to use Mochi Diffusion.
A few models have been converted and uploaded here.
- Convert or download Core ML models
  - `split_einsum` version is compatible with all compute unit options, including Neural Engine
  - `original` version is only compatible with the `CPU & GPU` option
- By default, the app’s model folder will be created under the Documents folder. This location can be customized under Settings
- In the model folder, create a new folder with the name you'd like displayed in the app, then move or extract the converted models there
- Your directory should look like this:
```
Documents/
└── MochiDiffusion/
    └── models/
        ├── stable-diffusion-2-1_split-einsum_compiled/
        │   ├── merges.txt
        │   ├── TextEncoder.mlmodelc
        │   ├── Unet.mlmodelc
        │   ├── VAEDecoder.mlmodelc
        │   ├── VAEEncoder.mlmodelc
        │   └── vocab.json
        ├── ...
        └── ...
```
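A minimal sketch of how model folders laid out this way could be discovered at runtime (assuming the default location under Documents; this is illustrative, not the app's actual implementation):

```swift
import Foundation

// Sketch: list model folders under the assumed default location.
let modelsURL = FileManager.default
    .urls(for: .documentDirectory, in: .userDomainMask)[0]
    .appendingPathComponent("MochiDiffusion/models")
let contents = (try? FileManager.default.contentsOfDirectory(
    at: modelsURL, includingPropertiesForKeys: nil)) ?? []
for url in contents {
    var isDir: ObjCBool = false
    if FileManager.default.fileExists(atPath: url.path, isDirectory: &isDir),
       isDir.boolValue {
        print(url.lastPathComponent) // folder name as it would appear in the app
    }
}
```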
Compatibility
- Apple Silicon (M1 and later) or Intel Mac (high performance CPU & GPU required)
- macOS Ventura 13.1 and later
- Xcode 14.2 (to build)
What’s New
Version 3.1:
- Added option to auto select ML Compute Unit (@vzsg)
- Added support for restoring `jpeg` files (@vzsg)
- Changed default model & image folder directory to user's home directory
- Changed import behavior to copy images
- Updated translations
Screenshots
Download Now