Today I wanted to move an existing APFS-resident macOS Catalina installation to a new disk. I upgraded my late 2014 Mac Mini with a shiny new 1TB SSD. This took way too many hours of my life I will never get back. I hope this saves you some time.

Good news:

1. it is possible to create a DMG image from an existing APFS container with a macOS Catalina installation, including the metadata needed for a complete restore (the DMG contains the OS, OS Data, Preboot, Recovery and VM volumes)
2. it is possible to restore this DMG image into an empty APFS container and get a bootable copy of the original system

This information is relevant for Catalina (I'm currently running macOS 10.15.1).

There are three major tricks to making it work; the second one is quite recent and the third one is its consequence:

#### Trick 1

The source of the restore operation must be a synthesized disk mounted from the DMG. Not the DMG file itself!

#### Trick 2

"[APFS Volume Groups](https://bombich.com/kb/ccc5/working-apfs-volume-groups)", a new feature introduced in Catalina, do not play well with the current `Disk Utility.app`, so you have to drop to the command line and force the legacy method of restoring the container.

#### Trick 3

The source DMG must not contain any APFS volume with APFS snapshots.

---

## Let's create the source DMG

This can be done comfortably from `Disk Utility.app`. You must be booted into a system other than your main one. In the left sidebar of `Disk Utility.app`, switch to "Show All Devices" and then right-click the whole APFS container holding your system. It should have a name like "container diskN", and none of its volumes should be currently mounted. In the context menu you should see an option "Image from container diskN".

Wait, hold your horses. Before making the image you might consider deleting all existing snapshots in all APFS volumes in the container. It will save you headaches later. Better finish reading the article first.

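When you do get to that cleanup, it can be sketched like this (the volume identifier `diskNs1` is a placeholder for your own; these are the same `diskutil apfs` subcommands used later in this article):

```bash
# list snapshots on one APFS volume of the container you are about to image
diskutil apfs listSnapshots diskNs1

# delete a single snapshot by the UUID taken from the listing above
sudo diskutil apfs deleteSnapshot diskNs1 -uuid <UUID>
```

Repeat for each volume of the container that reports snapshots.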
> Can I boot into RecoveryOS to make this DMG image?

No. RecoveryOS resides in one of the APFS volumes of the container you want to image, so I believe this is not possible (not tested).

> Oh, what do I do if I don't have a secondary system?

I would recommend installing a recent macOS on an external drive and booting from there. Alternatively, you can shrink the existing APFS container, create a new APFS container on your main disk, install a secondary macOS there and boot into it. This is quite easily doable just by clicking around in `Disk Utility.app`, provided you have enough space on the main drive and a macOS installer at hand.

## Clone (restore) the source container

`Disk Utility.app` seems to use the `asr` command under the hood. You can `man asr` to read the docs, especially the sections about APFS restore.

When restoring, you specify a source and a target. `asr` supports several ways of specifying them, covering scenarios from earlier filesystems. The source can be a DMG image file path or an existing disk in the form `/dev/diskN`.

One might expect that specifying a DMG image file as the source and an existing empty APFS container as the target would do the right thing. No! You get cryptic errors. The trick is to:

1. first, mount your DMG file => this creates a synthesized disk
2. second, figure out which `/dev/diskN` corresponds to the synthesized disk created in step 1
3. finally, use this synthesized disk as the source of your `asr` restore operation

Make sense?

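Step 2 can also be scripted. Here is a minimal Python sketch (an assumption of mine, not part of the original workflow) that extracts the disks `diskutil list` marks as `(synthesized)` from its text output:

```python
import re

def synthesized_disks(diskutil_output: str) -> list:
    """Return the /dev/diskN identifiers that `diskutil list` marks as synthesized."""
    return re.findall(r"^(/dev/disk\d+)\s+\(synthesized\):", diskutil_output, re.MULTILINE)

# hypothetical usage on a live system:
#   text = subprocess.run(["diskutil", "list"], capture_output=True, text=True).stdout
#   print(synthesized_disks(text))
```

Note that both the target container and the mounted DMG show up as synthesized (as in the listing below), so you still have to pick the one whose Physical Store is the DMG's disk image.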
Let me be more concrete with my case. My DMG file mounted as `disk9`, which created the synthesized `disk10`. As a target I created an empty APFS container on `disk0` with a single empty APFS volume via `Disk Utility.app`. The synthesized target container had the identifier `/dev/disk1`:

```
> diskutil list
/dev/disk0 (internal, physical):
   #:                       TYPE NAME                    SIZE        IDENTIFIER
   0:      GUID_partition_scheme                        *1.0 TB      disk0
   1:                        EFI EFI                     209.7 MB    disk0s1
   2:                 Apple_APFS Container disk1         1000.0 GB   disk0s2

/dev/disk1 (synthesized):
   #:                       TYPE NAME                    SIZE        IDENTIFIER
   0:      APFS Container Scheme -                      +1000.0 GB   disk1
                                 Physical Store disk0s2
   1:                APFS Volume NewEmptyOS
   ...

/dev/disk9 (disk image):
   #:                       TYPE NAME                    SIZE        IDENTIFIER
   0:                                                   +499.2 GB    disk9

/dev/disk10 (synthesized):
   #:                       TYPE NAME                    SIZE        IDENTIFIER
   0:      APFS Container Scheme -                      +499.2 GB    disk10
                                 Physical Store disk9
   1:                APFS Volume MinimeOS - Data         311.1 GB    disk10s1
   2:                APFS Volume Preboot                 82.2 MB     disk10s2
   3:                APFS Volume Recovery                528.5 MB    disk10s3
   4:                APFS Volume VM                      4.3 GB      disk10s4
   5:                APFS Volume MinimeOS                10.9 GB     disk10s5
```

To execute the restore I had to use the `asr` command from the command line:

```bash
> sudo asr restore --source /dev/disk10 --target /dev/disk1
...
Volume replication failed - Read-only file system
```

The `--useInverter` option is the trick here. The replication functionality, which is enabled by default, is quite recent and does not play well with APFS Volume Groups. It fails when trying to do something with the read-only system volume, as seen above.

I consider this a bug that will hopefully be resolved in some future macOS version. Unfortunately `Disk Utility.app` seems to use `asr` without the `--useInverter` flag, and it gives me the same error in the GUI when trying to restore a mounted DMG into an existing container.

So I ran it with the flag and got:

```
> sudo asr restore --source /dev/disk10 --target /dev/disk1 --useInverter
...
APFS inverter failed to invert the volume - Invalid argument
```

Uh oh!? Running the restore again with the `--verbose` and `--debug` flags gave me a hint:

```
> sudo asr restore --source /dev/disk10 --target /dev/disk1 --useInverter --verbose --debug
...
*** Mounting inner volume (ContainerToInvert)...
...
mount_inner_volume:884: Inner volume has snapshots

APFS inverter failed to invert the volume - Invalid argument
```

Ah, now I remember that the man page states that `--useInverter` does not work with snapshots. The error does not tell me which volume has the issue, but I can figure that out myself:

```
> diskutil apfs listSnapshots disk10s1
Snapshots for disk10s1 (20 found)
|
+-- 9801FBE4-E581-4B95-A055-8A94A9C2ABE9
|   Name:      com.apple.TimeMachine.2019-11-13-000654.local
|   XID:       2200904
|   Purgeable: Yes
|   NOTE:      This snapshot limits the minimum size of APFS Container disk10
|
...
|
+-- 46C315AB-6014-4275-AF94-B284E476718A
|   Name:      com.apple.TimeMachine.2019-11-13-220854.local
|   XID:       2211035
|   Purgeable: Yes
|
+-- 72FE9F27-A358-42F0-91CE-CBC0E8140B46
    Name:      com.apple.TimeMachine.2019-11-13-231202.local
    XID:       2212680
    Purgeable: Yes

> diskutil apfs listSnapshots disk10s5
Snapshots for disk10s5 (3 found)
|
+-- B07AE72B-1EE9-4145-827F-CB31E06635BB
|   Name:      com.apple.TimeMachine.2019-11-13-211042.local
|   XID:       2210200
|   Purgeable: Yes
|   NOTE:      This snapshot limits the minimum size of APFS Container disk10
|
+-- D124E17B-B061-4143-B69D-6626E5E8B41A
|   Name:      com.apple.TimeMachine.2019-11-13-220854.local
|   XID:       2211035
|   Purgeable: Yes
|
+-- 1430DB4E-6527-4FA3-B226-77006055CA99
    Name:      com.apple.TimeMachine.2019-11-13-231202.local
    XID:       2212676
    Purgeable: Yes

> diskutil apfs listSnapshots disk10s4
No snapshots for disk10s4

> diskutil apfs listSnapshots disk10s3
No snapshots for disk10s3

> diskutil apfs listSnapshots disk10s2
No snapshots for disk10s2
```

So the Data volume `disk10s1` and the OS volume `disk10s5` have snapshots created by Time Machine.

My task now is to delete them and run the restore again. The problem is that the `disk10` volumes are read-only, because they were mounted from a read-only DMG.

... a few hours of banging my head against a wall ...

Ok, I will spare you and won't describe all my trial and error here.

Finally I found a way to strip all snapshots from `disk10s1`:

```
> hdiutil attach -owners on /Volumes/W1/backups/mos.dmg -shadow
expected CRC32 $0A545D85
/dev/disk11
/dev/disk12      EF57347C-0000-11AA-AA11-0030654
/dev/disk12s1    41504653-0000-11AA-AA11-0030654    /Volumes/MinimeOS - Data 1
/dev/disk12s2    41504653-0000-11AA-AA11-0030654    /Volumes/Preboot 1
/dev/disk12s3    41504653-0000-11AA-AA11-0030654    /Volumes/Recovery 1
/dev/disk12s4    41504653-0000-11AA-AA11-0030654    /Volumes/VM 1
/dev/disk12s5    41504653-0000-11AA-AA11-0030654    /Volumes/MinimeOS 1
```

The magical `-shadow` option mounts the image read-write and keeps a diff of the changes in a `/Volumes/W1/backups/mos.dmg.shadow` file. So if you try this at home, make sure you have enough space available for the shadow file. I'm not sure whether `-owners on` is important, but this combination worked for me after some earlier problems. Also note that an alternative solution is to use `hdiutil convert -format UDRW ...` and continue with the converted read-write image; see the Q/A section below.

Anyway, having a writable disk mounted allowed me to enumerate all snapshots and delete them:

```bash
# this is just a dry-run to see all snapshot UUIDs to be deleted
> diskutil apfs listsnapshots /dev/disk12s1 | grep "+--" | cut -d" " -f2
9801FBE4-E581-4B95-A055-8A94A9C2ABE9
DA44B969-EBA6-4EFE-ADCA-B4BB43FF6B37
8AEFF1F4-BC35-4C05-A21C-3122A97569B4
0149B1B9-9095-4896-B883-CD976802E419
A6E252E6-2007-4DD5-9313-EE2156C0C813
F3C38242-8AF7-42A8-9EC6-CF6381F647C0
D6391D9B-4ADB-44D4-9F05-C58FABC730AD
9281E5D8-867D-423D-A95F-7C4E3B4F34E2
82EA6CA1-FB41-40DC-8F80-B8C34311E491
EEE95161-ABFA-4E99-B893-2887DF4EC317
779FAA27-DACD-41C2-A408-E731DC6E96A1
36E7F342-9E5E-43F4-BA45-0441F9067244
8359CA22-79A6-43DF-A265-35BEB8D4FC7E
70105B62-19B0-4F14-9FFE-E73C6E32C74A
3C581209-FC4B-4D14-9491-95AE656B3335
068E2DD8-B6CF-454E-A5CC-8E68261EC9A6
422788AF-2A6E-40D6-976D-CC3F93A925CA
B647F1C2-E4CF-4F90-8A77-3C1CB2B992C9
```

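If the `grep`/`cut` pipeline feels fragile, the same extraction can be sketched in Python (a hedged alternative of mine, assuming the `listSnapshots` output format shown earlier):

```python
import re

# a snapshot UUID as printed by `diskutil apfs listSnapshots`: 8-4-4-4-12 hex groups
UUID_RE = re.compile(r"\+--\s+([0-9A-Fa-f]{8}(?:-[0-9A-Fa-f]{4}){3}-[0-9A-Fa-f]{12})")

def snapshot_uuids(listing: str) -> list:
    """Extract snapshot UUIDs from the text output of listSnapshots."""
    return UUID_RE.findall(listing)
```

Each returned UUID can then be passed to `diskutil apfs deleteSnapshot <volume> -uuid <UUID>` as in the next step.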
```bash
# delete them all
> diskutil apfs listsnapshots /dev/disk12s1 | grep "+--" | cut -d" " -f2 | xargs -I{} sudo diskutil apfs deleteSnapshot disk12s1 -uuid {}
Deleting APFS Snapshot DA44B969-EBA6-4EFE-ADCA-B4BB43FF6B37 "com.apple.TimeMachine.2019-11-13-011547.local" from APFS Volume disk10s1
Started APFS operation
Finished APFS operation
Deleting APFS Snapshot 8AEFF1F4-BC35-4C05-A21C-3122A97569B4 "com.apple.TimeMachine.2019-11-13-030651.local" from APFS Volume disk10s1
Started APFS operation
Finished APFS operation
...
Deleting APFS Snapshot B647F1C2-E4CF-4F90-8A77-3C1CB2B992C9 "com.apple.TimeMachine.2019-11-13-211042.local" from APFS Volume disk10s1
Started APFS operation
Finished APFS operation
```

To double-check the result, I ran:

```bash
> diskutil apfs listsnapshots /dev/disk12s1
No snapshots for disk12s1
```

I repeated the same snapshot purging with `disk12s5` and confirmed:

```bash
> diskutil apfs listsnapshots /dev/disk12s5
No snapshots for disk12s5
```

Ok, hopefully there are now no snapshots in any volume of the whole container, and I can run the restore again using the massaged `disk12`:

```
> sudo asr restore --source /dev/disk12 --target /dev/disk1 --useInverter
Validating target...done
Validating source...done
Validating sizes...done
Restoring  ....10....20....30....40....50....60....70....80....90....100
Verifying  ....10....20....30....40....50....60....70....80....90....100
Inverting target volume...done
Restoring  ....10....20....30....40....50....60....70....80....90....100
Verifying  ....10....20....30....40....50....60....70....80....90....100
Inverting target volume...done
Restored target device is /dev/disk1s2.
```

The disk is bootable and seems to be a 1:1 copy of the original. Sure, it's not really a device-level block-by-block copy, but it's good enough.

## It was easy, any questions?

> How do I mount the DMG?

#### Case 1: if your DMG does not contain any snapshots

##### Option 1: via Finder.app

Simply double-click the DMG file and it should mount. Then use `diskutil list` or `Disk Utility.app` to figure out which synthesized disk it was mounted as. It should have an "APFS Container Scheme":

```
/dev/disk9 (disk image):
   #:                       TYPE NAME                    SIZE        IDENTIFIER
   0:                                                   +499.2 GB    disk9

/dev/disk10 (synthesized):
   #:                       TYPE NAME                    SIZE        IDENTIFIER
   0:      APFS Container Scheme -                      +499.2 GB    disk10
                                 Physical Store disk9
   1:                APFS Volume MinimeOS - Data         311.1 GB    disk10s1
   2:                APFS Volume Preboot                 82.2 MB     disk10s2
   3:                APFS Volume Recovery                528.5 MB    disk10s3
   4:                APFS Volume VM                      4.3 GB      disk10s4
   5:                APFS Volume MinimeOS                10.9 GB     disk10s5
```

##### Option 2: via command-line

```
> hdiutil attach /Volumes/W1/backups/mos.dmg
Checksumming whole disk (Apple_APFS : 0)…
....................................................................................................................
whole disk (Apple_APFS : 0): verified CRC32 $09A8F547
verified CRC32 $0A545D85
/dev/disk10      EF57347C-0000-11AA-AA11-0030654
/dev/disk10s1    41504653-0000-11AA-AA11-0030654    /Volumes/MinimeOS - Data
/dev/disk10s2    41504653-0000-11AA-AA11-0030654
/dev/disk10s3    41504653-0000-11AA-AA11-0030654
/dev/disk10s4    41504653-0000-11AA-AA11-0030654
/dev/disk10s5    41504653-0000-11AA-AA11-0030654    /Volumes/MinimeOS
/dev/disk9
```

#### Case 2: if your DMG has snapshots, you must mount it read-write with a shadow file

```
> hdiutil attach -owners on /Volumes/W1/backups/mos.dmg -shadow
expected CRC32 $0A545D85
/dev/disk11
/dev/disk12      EF57347C-0000-11AA-AA11-0030654
/dev/disk12s1    41504653-0000-11AA-AA11-0030654    /Volumes/MinimeOS - Data
/dev/disk12s2    41504653-0000-11AA-AA11-0030654    /Volumes/Preboot
/dev/disk12s3    41504653-0000-11AA-AA11-0030654    /Volumes/Recovery
/dev/disk12s4    41504653-0000-11AA-AA11-0030654    /Volumes/VM
/dev/disk12s5    41504653-0000-11AA-AA11-0030654    /Volumes/MinimeOS
```

> How do I convert a read-only DMG to a writable DMG?

This is what worked for me:

```
> hdiutil convert /Volumes/W1/backups/mos.dmg -format UDRW -o /Volumes/W1/backups/mos-rw.dmg
nx_kernel_mount:1387: : reloading after unclean unmount, checkpoint xid 2213158, superblock xid 2213151
Reading whole disk (Apple_APFS : 0)…
....................................................................................................
Elapsed Time: 3h 14m 56.431s
Speed: 40.7Mbytes/sec
Savings: 0.0%
created: /Volumes/W1/backups/mos-rw.dmg
```

You can then follow case 1 to mount `/Volumes/W1/backups/mos-rw.dmg` instead of `/Volumes/W1/backups/mos.dmg`.

As you can see from the output, this conversion can take quite some time if the file is on an external/slower disk, so it is preferable to use `-shadow` mounting as described in the article.

> I've launched hdiutil/asr and it seems to be stuck, what now?

With larger images/disks it might take up to several hours for the operation to complete, and the progress indicator in the terminal is not granular enough to show any activity for minutes at a time. I would recommend opening `Activity Monitor.app`, switching to the `Disk` view and observing the busiest reader/writer processes. This should give you a sense of whether your command is doing any work or just slacking. It can also give you a rough lower-bound estimate of progress, because the command usually needs to read or write its source or target data at least once. So if you have a 300GB DMG and see a read speed of 1GB/s, you can decide whether there is time for a nap. Also, better disable power-saving, because your Mac could hibernate unexpectedly while working.

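
That back-of-the-envelope estimate is simple arithmetic; here is a tiny sketch (the numbers are whatever you observe in Activity Monitor):

```python
def eta_minutes(size_gb: float, speed_gb_per_s: float) -> float:
    """Lower-bound time estimate: the data must pass through at least once."""
    return size_gb / speed_gb_per_s / 60.0

# a 300 GB DMG read at 1 GB/s needs at least 5 minutes
print(eta_minutes(300, 1.0))
```

Remember it is only a lower bound; `asr` both reads the source and writes (and verifies) the target, so the real duration will be longer.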